rowid,title,contents,year,author,author_slug,published,url,topic
51,Blow Your Own Trumpet,"Even if your own trumpet’s tiny and fell out of a Christmas cracker, blowing it isn’t something that everyone’s good at. Some people find selling themselves and what they do difficult. But, you know what? Boo hoo hoo. If you want people to buy something, the reality is you’d better get good at selling, especially if that something is you.
For web professionals, the best place to tell potential business customers or possible employers about what you do is on your own website. You can write what you want and how you want, but that doesn’t make knowing what to write any easier. As a matter of fact, writing for yourself often proves harder than writing for someone else.
I spent this autumn thinking about what I wanted to say about Stuff & Nonsense on the website we relaunched recently. While I did that, I spoke to other designers about how they struggled to write about their businesses.
If you struggle to write well, don’t worry. You’re not on your own. Here are five ways to hit the right notes when writing about yourself and your work.
Be genuine about who you are
I’ve known plenty of talented people who run a successful business pretty much single-handed. Somehow they still feel awkward presenting themselves as individuals. They wonder whether describing themselves as a company will give them extra credibility. They especially agonise over using “we” rather than “I” when describing what they do. These choices get harder when you’re a one-man band trading as a limited company or LLC business entity.
If you mainly work alone, don’t describe yourself as anything other than “I”. You might think that saying “we” makes you appear larger and will give you a better chance of landing bigger and better work, but the moment a prospective client asks, “How many people are you?” you’ll have some uncomfortable explaining to do. This will distract them from talking about your work and derail your sales process. There’s no need to be anything other than genuine about how you describe yourself. You should be proud to say “I” because working alone isn’t something that many people have the ability, business acumen or talent to do.
Explain what you actually do
How many people do precisely the same job as you? Hundreds? Thousands? The same goes for companies. If yours is a design studio, development team or UX consultancy, there are countless others saying exactly what you’re saying about what you do. Simply stating that you code, design or – God help me – “handcraft digital experiences” isn’t enough to make your business sound different from everyone else. Anyone can and usually does say that, but people buy more than deliverables. They buy something that’s unique about you and your business.
Potentially thousands of companies deliver code and designs the same way as Stuff & Nonsense, but our clients don’t just buy page designs, prototypes and websites from us. They buy our taste for typography, colour and layout, summed up by our “It’s the taste” tagline and bowler hat tip to the PG Tips chimps. We hope that potential clients will understand what’s unique about us. Think beyond your deliverables to what people actually buy, and sell the uniqueness of that.
Describe work in progress
It’s sad that current design trends have made it almost impossible to tell one website from another. So many designers now demonstrate finished responsive website designs by pasting them onto iMac, MacBook, iPad and iPhone screens that their portfolios don’t fare much better. Every designer brings their own experience, perspective and process to a project. In my experience, it’s understanding those differences which forms a big part of how a prospective client makes a decision about who to work with. Don’t simply show a prospective client the end result of a previous project; explain your process, the development of your thinking and even the wrong turns you took.
Traditional case studies, like the one I’ve just written about Stuff & Nonsense’s work for WWF UK, can take a lot of time. That’s probably why many portfolios get out of date very quickly. Designers make new work all the time, so there must be a better way to show more of it more often, to give prospective clients a clearer understanding of what we do. At Stuff & Nonsense our solution was to create a feed where we could post fragments of design work throughout a project. This also meant rewriting our Contract Killer to give us permission to publish work before someone signs it off.
Outline a client’s experience
Recently a client took me to one side and offered some valuable advice. She told me that our website hadn’t described anything about the experience she’d had while working with us. She said that knowing more about how we work would’ve helped her make her buying decision.
When a client chooses your business, they’re hoping for more than a successful outcome. They want their project to run smoothly. They want to feel that they made a correct decision when they chose you. If they work for an organisation, they’ll want their good judgement to be recognised too. Our client didn’t recognise her experience because we hadn’t made our own website part of it. Remember, the challenge of creating a memorable user experience starts with selling to the people paying you for it.
Address your ideal client
It’s important to understand that a portfolio’s job isn’t to document your work, it’s to attract new work from clients you want. Make sure that work you show reflects the work you want, because what you include in your portfolio often leads to more of the same.
When you’re writing for your portfolio and elsewhere on your website, imagine that you’re addressing your ideal client. Picture them sitting opposite and answer the questions they’d ask as you would in conversation. Be direct, funny if that’s appropriate and serious when it’s not. If it helps, ask a friend to read the questions aloud and record what you say in response. This will help make what you write sound natural. I’ve found this technique helps clients write copy too.
Toot your own horn
Some people confuse expressing confidence in yourself and your work as boastfulness, but in a competitive world the reality is that if you are to succeed, you need to show confidence so that others can show their confidence in you. If you want people to hear you, pick up your trumpet and blow it.",2015,Andy Clarke,andyclarke,2015-12-23T00:00:00+00:00,https://24ways.org/2015/blow-your-own-trumpet/,business
56,Helping VIPs Care About Performance,"Making a site feel super fast is the easy part of performance work. Getting people around you to care about site speed is a much bigger challenge. How do we keep the site fast beyond the initial performance work? Keeping very important people like your upper management or clients invested in performance work is critical to keeping a site fast and empowering other designers and developers to contribute.
The work to get others to care is so meaty that I dedicated a whole chapter to the topic in my book Designing for Performance. When I speak at conferences, the majority of questions during Q&A are on this topic. When I speak to developers and designers who care about performance, getting other people at one’s organization or agency to care becomes the most pressing question.
My primary response to folks who raise this issue is the question: “What metric(s) do your VIPs care about?” This is often met with blank stares and raised eyebrows. But it’s also our biggest clue to what we need to do to help empower others to care about performance and work on it. Every organization and executive is different. This means that three major things vary: the primary metrics VIPs care about; the language they use about measuring success; and how change is enacted. By clueing in to these nuances within your organization, you can get a huge leg up on crafting a successful pitch about performance work.
Let’s start with the metric that we should measure. Sure, (most) everybody cares about money – but is that really the metric that your VIPs are looking at each day to measure the success or efficacy of your site? More likely, dollars are the end game, but the metrics or key performance indicators (KPIs) people focus on might be:
rate of new accounts created/signups
cost of acquiring or retaining a customer
visitor return rate
visitor bounce rate
favoriting or another interaction rate
These are just a few examples, but they illustrate how wide-ranging the options are that people care about. I find that developers and designers haven’t necessarily investigated this when trying to get others to care about performance. We often reach for the obvious – money! – but if we don’t use the same kind of language our VIPs are using, we might not get too far. You need to know this before you can make the case for performance work.
To find out these metrics or KPIs, start reading through the emails your VIPs are sending within your company. What does it say on company wikis? Are there major dashboards internally that people are looking at where you could find some good metrics? Listen intently in team meetings or thoroughly read annual reports to see what these metrics could be.
The second key here is to pick up on language you can effectively copy and paste as you make the case for performance work. You need to be able to reflect back the metrics that people already find important in a way they’ll be able to hear. Once you know your key metrics, it’s time to figure out how to communicate with your VIPs about performance using language that will resonate with them.
Let’s start with visit traffic as an example metric that a very important person cares about. Start to dig up research that other people and companies have done that correlates performance and your KPI. For example, cite studies:
“When the home page of Google Maps was reduced from 100KB to 70–80KB, traffic went up 10% in the first week, and an additional 25% in the following three weeks.” (source).
Read through websites like WPOStats, which collects the spectrum of studies on the impact of performance optimization on user experience and business metrics. Tweet and see if others have done similar research that correlates performance and your site’s main KPI.
Once you have collected some research that speaks the same kind of language your VIPs use about the success of your site, it’s time to present it. You can start with something simple, like a qualitative description of the work you’re actively doing to improve the site, tied to the metrics your VIPs care about. It can also be helpful to append a performance budget to any proposal, so you can compare the budget against your site’s current reality and show how meeting it might positively impact the KPIs folks care about. (A budget can be as simple as a handful of limits: total page weight under 800KB, say, or start render under two seconds on 3G.)
Words and graphs are often only half the battle when it comes to getting others to care about performance. Often, videos appeal to folks’ emotions in a way that is missed when glancing through charts and graphs. On A List Apart I recently detailed how to create videos of how fast your site loads. Let’s say that your VIPs care about how your site loads on mobile devices; it’s time to show them how your site loads on mobile networks.
You can use these videos to make a number of different statements to your VIPs, depending on what they care about:
Look at how slow our site loads versus our competitor!
Look at how slow our site loads for users in another country!
Look at how slow our site loads on mobile networks!
Again, you really need to know which metrics your VIPs care about and tune into the language they’re using. If they don’t care about the overall user experience of your site on mobile devices, then showing them how slow your site loads on 3G isn’t going to work. This will be your sales pitch; you need to practice and iterate on the language and highlights that will land best with your audience.
To make your sales pitch as solid as possible, gut-check your ideas on how to present it with other co-workers to get their feedback. Read up on how to construct effective arguments and deliver them; do some research and see what others have done at your company when pitching to VIPs. Are slides effective? Memos or emails? Hallway conversations? Sometimes the best way to change people’s minds is by mentioning it in informal chats over coffee. Emulate the other leaders in your organization who are successful at this work.
Every organization and very important person is different. Learn what metrics folks truly care about, study the language that they use, and apply what you’ve learned in a way that’ll land with those individuals. It may take time to craft your pitch for performance work, but it’s important work to do. If you’re able to figure out how to mirror back the language and metrics VIPs care about, and connect the dots to performance for them, you will have a huge leg up on keeping your site fast in the long run.",2015,Lara Hogan,larahogan,2015-12-08T00:00:00+00:00,https://24ways.org/2015/helping-vips-care-about-performance/,business
60,What’s Ahead for Your Data in 2016?,"Who owns your data? Who decides what you can do with it? Where can you store it? What guarantee do you have over your data’s privacy? Where can you publish your work? Can you adapt software to accommodate your disability? Is your tiny agency subject to corporate regulation? Does another country have rights over your intellectual property?
If you aren’t the kind of person who is interested in international politics, I hate to break it to you: in 2016 the legal foundations which underpin our work on the web are being revisited in not one but three major international political agreements, and every single one of those questions is up for grabs. These agreements – the draft EU Data Protection Regulation (EUDPR), the Trans-Pacific Partnership (TPP), and the draft Transatlantic Trade and Investment Partnership (TTIP) – stand poised to have a major impact on your data, your workflows, and your digital rights. While some proposed changes could protect the open web for the future, other provisions would set the internet back several decades.
In this article we will review the issues you need to be aware of as a digital professional. While each of these agreements covers dozens of topics ranging from climate change to food safety, we will focus solely on the aspects which pertain to the work we do on the web.
The Trans-Pacific Partnership
The Trans-Pacific Partnership (TPP) is a free trade agreement between the US, Japan, Malaysia, Vietnam, Singapore, Brunei, Australia, New Zealand, Canada, Mexico, Chile and Peru – a bloc comprising 40% of the world’s economy. The agreement is expected to be signed by all parties, and thereby to come into effect, in 2016. This agreement is ostensibly about the bloc and its members working together for their common interests. However, the latest draft text of the TPP, which was formulated entirely in secret, has only been made publicly available on a Medium blog published by the U.S. Trade Representative which features a patriotic banner at the top proclaiming “TPP: Made in America.” The message sent about who holds the balance of power in this agreement, and whose interests it will benefit, is clear.
By far the most controversial area of the TPP has centred around the provisions on intellectual property. These include copyright terms of up to 120 years, mandatory takedowns of allegedly infringing content in response to just one complaint regardless of that complaint’s validity, heavy and disproportionate penalties for alleged violations, and – most frightening of all – government seizures of equipment allegedly used for copyright violations. All of these provisions have been raised without regard for the fact that a trade agreement is not the appropriate venue to negotiate intellectual property law.
Other draft TPP provisions would restrict the digital rights of people with disabilities by banning the workarounds they use every day. These include a lack of exemptions for adaptations of copyrighted works for use in accessible technology (such as text-to-speech in ebook readers), a ban on circumventing DRM or digital locks in order to convert a file to an accessible format, and a requirement to take down adapted works, such as a video with added subtitles, even if that adaptation would normally have fallen under the definition of fair use.
The e-commerce provisions would prohibit data localisation, the practice of requiring data to be physically stored on servers within a country’s borders. Data localisation is growing in popularity following the Snowden revelations, and some of your own personal data may have been recently “localised” in response to the Safe Harbor verdict. Prohibiting data localisation through the TPP would address the symptom but not the cause.
The Electronic Frontier Foundation has published an excellent summary of the digital rights issues raised by the agreement along with suggested actions American readers can take to speak out.
Transatlantic Trade and Investment Partnership
TTIP stands for the Transatlantic Trade and Investment Partnership, a draft free trade agreement between the United States and the EU. The plan has been hugely controversial and divisive, and the internet and digital provisions of the draft form just a small part of that contention.
The most striking digital provision of TTIP is an attempt to circumvent and override European data protection law. As EDRI, a European digital rights organisation, noted:
“the US proposal would authorise the transfer of EU citizens’ personal data to any country, trumping the EU data protection framework, which ensures that this data can only be transferred in clearly defined circumstances. For years, the US has been trying to bypass the default requirement for storage of personal data in the EU. It is therefore not surprising to see such a proposal being [introduced] in the context of the trade negotiations.”
This draft provision was written before the Safe Harbor data protection agreement between the EU and US was invalidated by the Court of Justice of the European Union. In other words, there is no longer any protective agreement in place, and our data is as vulnerable as this political situation. However, data protection is a matter for its own law: the existing Data Protection Directive and the draft EU Data Protection Reform. A trade agreement, be it the TTIP or the TPP, is not the appropriate place to revamp a law on data protection.
Other digital law issues raised by TTIP include the possibility of renegotiating standards on encryption (which in practice means lowering them) and renegotiating intellectual property rights such as copyright. The spectre of net neutrality has even put in an appearance, with attempts to write rules on access to the internet itself into the agreement’s provisions.
TTIP is still under discussion, and this month the EU trade representative said that “we agreed to further intensify our work during 2016 to help negotiations move forward rapidly.” This has been cleverly worded: this means the agreement has little chance of being passed or coming into effect in 2016, which buys civil society more precious time to speak out.
The EU Data Protection Regulation
On 15 December 2015 the European Commission announced their agreement on the text of the draft General Data Protection Regulation. This law will replace its predecessor, the EU Data Protection Directive of 1995, which has done a remarkable job of protecting data privacy across the continent throughout two decades of constant internet evolution.
The goal of the reform process has been to return power over data, and its uses, to citizens. Users will have more control over what data is captured about them, how it is used, how it is retained, and how it can be deleted. Businesses and digital professionals, in turn, will have to restructure their relationships with client and customer data. Compliance obligations will increase, and difficult choices will have to be made. However, this time should be seen as an opportunity to rethink our relationship with data. After Snowden, Schrems, and Safe Harbor, it is clear that we cannot go back to the way things were before. In an era where every one of our heartbeats is recorded on a wearable device and uploaded to a surveilled data centre in another country, the need for reform has never been more acute.
While texts of the draft GDPR are available, there is not enough mulled wine in the world to help you get through them. Instead, the law firm Fieldfisher Waterhouse has produced a helpful infographic which will give you a good idea of the changes we can expect to see.
The most surprising outcome announced on 15 December was the new regulation’s teeth. Under the new law, companies that fail to heed the updated data protection rules will face fines of up to 4% of their global turnover. Additionally, the law expands the liability for data protection to both the controller (the company hosting the data) and the data processor (the company using the data). The new law will also introduce a one-stop shop for resolving concerns over data misuse. Companies will no longer be able to headquarter their European operations in countries which are perceived to have relatively light-touch data protection enforcement (that means you, Ireland) as a means of automatically rejecting any complaints filed by citizens outside that country.
For digital professionals, the most immediate concern is analytics. In fact, I am going to make a prediction: in 2016 we will begin to see the same misguided war on analytics that we saw on cookies. By increasing the legal liabilities for both data processors and controllers – in other words, the company providing the analytics as well as the site administrator studying them – the new regulation risks creating disproportionate burdens as well as the same “guilt by association” risks we saw in 2012. There have already been statements made by some within the privacy community that analytics are tracking, and tracking is surveillance, therefore analytics are evil. Yet “just don’t use analytics,” as was suggested by one advocate, is simply not an option. European regulators should consult with the web community to gain a clear understanding of why analytics are vital to everyday site administrators, and must find a happy medium that protects users’ data without criminalising every website by default. No one wants a repeat of the crisis of consent, as well as the scaremongering, caused by the cookie law.
Assuming the text is adopted in 2016, the new EU Data Protection Regulation would not come into effect until 2018. We have a considerable challenge ahead, but we also have plenty of time to get it right.",2015,Heather Burns,heatherburns,2015-12-21T00:00:00+00:00,https://24ways.org/2015/whats-ahead-for-your-data-in-2016/,business
49,Universal React,"One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications.
More generally we’ve seen an even greater rise in the number of applications built primarily on the client side, with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google’s search bots). You also lose the performance improvement of rendering from the server, where content appears without the user having to wait for all the JavaScript to be loaded and executed.
The good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. The way in which these apps work is as follows:
The user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page.
In the background, the client-side JavaScript is executed and takes over the duty of rendering the page.
The next time the user clicks a link, the request is handled by the client-side app rather than being sent to the server.
If the user doesn’t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again.
This means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that’s clever enough to handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem.
It’s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you’d like a refresher, the ReactJS docs are a good place to start.
Getting started
We’re going to create a tiny ReactJS application that will work on the server and the client. First we’ll need to create a new project and install some dependencies. In a new, blank directory, run:
npm init -y
npm install --save ejs express react react-router react-dom
That will create a new project and install our dependencies:
ejs is a templating engine that we’ll use to render our HTML on the server.
express is a small web framework we’ll run our server on.
react-router is a popular routing solution for React so our app can fully support and respect URLs.
react-dom is a small React library used for rendering React components.
We’re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too.
npm install --save-dev babel-cli babel-preset-es2015 babel-preset-react
Then, create a .babelrc file that contains the following:
{
""presets"": [""es2015"", ""react""]
}
What we’ve done here is install Babel’s command line interface (CLI) tool and configure it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We’ll need the React transforms once we start writing JSX for our React components.
Creating a server
For now, our ExpressJS server is pretty straightforward. All we’ll do is render a view that says ‘Hello World’. Here’s our server code:
import express from 'express';
import http from 'http';
const app = express();
app.use(express.static('public'));
app.set('view engine', 'ejs');
app.get('*', (req, res) => {
res.render('index');
});
const server = http.createServer(app);
server.listen(3003);
server.on('listening', () => {
console.log('Listening on 3003');
});
Here we’re using ES6 modules, which I wrote about on 24 ways last year, if you’d like a reminder. We tell the app to render the index view on any GET request (that’s what app.get('*') means, the wildcard matches any route).
We now need to create the index view file, which Express expects to be defined in views/index.ejs:
<!DOCTYPE html>
<html>
<head>
  <title>My App</title>
</head>
<body>
  Hello World
</body>
</html>
Finally, we’re ready to run the server. Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command:
./node_modules/.bin/babel-node server.js
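If you’d rather not type out that path every time, one option is to add a start script to package.json (a small convenience I’m sketching here, not something the setup requires):
""scripts"": {
  ""start"": ""babel-node server.js""
}
With that in place, npm start will boot the server for you, because npm automatically resolves binaries in node_modules/.bin.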
And you should now be able to visit http://localhost:3003 and see ‘Hello World’ right there.
Building the React app
Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs.
Firstly, let’s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view. We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so:
<%- markup %>
Next, we’ll define the routes we want our app to have using React Router. For now we’ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it’s clearer to define them as an object. Here’s what we’re starting with:
const routes = {
path: '',
component: AppComponent,
childRoutes: [
{
path: '/',
component: IndexComponent
}
]
}
These are just placed at the top of server.js, after the import statements. Later we’ll move these into a separate file, but for now they are fine where they are.
Notice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let’s quickly define components/app.js and components/index.js. app.js looks like so:
import React from 'react';
export default class AppComponent extends React.Component {
render() {
return (
  <div>
    <h2>Welcome to my App</h2>
    { this.props.children }
  </div>
);
}
}
When a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland:
import React from 'react';
export default class IndexComponent extends React.Component {
render() {
return (
  <p>This is the index page</p>
);
}
}
Server-side routing with React Router
Head back into server.js, and firstly we’ll need to add some new imports:
import React from 'react';
import { renderToString } from 'react-dom/server';
import { match, RoutingContext } from 'react-router';
import AppComponent from './components/app';
import IndexComponent from './components/index';
The ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It’s this method that we’ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we’ll need to render. This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don’t need to concern yourself about how this component works, so don’t worry too much.
Now for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes:
app.get('*', (req, res) => {
// routes is our object of React routes defined above
match({ routes, location: req.url }, (err, redirectLocation, props) => {
if (err) {
// something went badly wrong, so 500 with a message
res.status(500).send(err.message);
} else if (redirectLocation) {
// we matched a ReactRouter redirect, so redirect from the server
res.redirect(302, redirectLocation.pathname + redirectLocation.search);
} else if (props) {
// if we got props, that means we found a valid component to render
// for the given route
const markup = renderToString(<RoutingContext {...props} />);
// render `index.ejs`, but pass in the markup we want it to display
res.render('index', { markup })
} else {
// no route match, so 404. In a real app you might render a custom
// 404 view here
res.sendStatus(404);
}
});
});
We call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occurring or a redirect (React Router has built in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we’ve made it this far it means we found a matching component to render and we can use this code to render it:
...
} else if (props) {
// if we got props, that means we found a valid component to render
// for the given route
const markup = renderToString(<RoutingContext {...props} />);
// render `index.ejs`, but pass in the markup we want it to display
res.render('index', { markup })
} else {
...
}
The renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components.
Note the {...props}, which is a neat bit of JSX syntax that spreads out our object into key value properties. To see this better, note the two pieces of JSX code below, both of which are equivalent:
// (using a stand-in <Foo> component)
<Foo a=""foo"" b=""bar"" />
// OR:
const props = { a: ""foo"", b: ""bar"" };
<Foo {...props} />
Running the server again
I know that felt like a lot of work, but the good news is that once you’ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. To check, restart the server and head to http://localhost:3003 once more. You should see it all working!
Refactoring and one more route
Before we move on to getting this code running on the client, let’s add one more route and do some tidying up. First, move our routes object out into routes.js:
import AppComponent from './components/app';
import IndexComponent from './components/index';
const routes = {
path: '',
component: AppComponent,
childRoutes: [
{
path: '/',
component: IndexComponent
}
]
}
export { routes };
And then update server.js. You can remove the two component imports and replace them with:
import { routes } from './routes';
Finally, let’s add one more route for /about and links between the pages. Create components/about.js:
import React from 'react';
export default class AboutComponent extends React.Component {
render() {
return (
  <p>A little bit about me.</p>
);
}
}
And then you can add it to routes.js too:
import AppComponent from './components/app';
import IndexComponent from './components/index';
import AboutComponent from './components/about';
const routes = {
path: '',
component: AppComponent,
childRoutes: [
{
path: '/',
component: IndexComponent
},
{
path: '/about',
component: AboutComponent
}
]
}
export { routes };
If you now restart the server and head to http://localhost:3003/about you’ll see the about page!
For the finishing touch we’ll use the React Router link component to add some links between the pages. Edit components/app.js to look like so:
import React from 'react';
import { Link } from 'react-router';
export default class AppComponent extends React.Component {
render() {
return (
  <div>
    <h2>Welcome to my App</h2>
    <ul>
      <li><Link to=""/"">Home</Link></li>
      <li><Link to=""/about"">About</Link></li>
    </ul>
    { this.props.children }
  </div>
);
}
}
You can now click between the pages to navigate. However, every time we do so the requests hit the server. Now we’re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience.
Client-side rendering
First, we’re going to make a small change to views/index.ejs. React doesn’t like rendering directly into the body and will give a warning when you do so. To prevent this we’ll wrap our app in a div:
<div id=""app""><%- markup %></div>
<script src=""build.js""></script>
I’ve also added in a script tag to build.js, which is the file we’ll generate containing all our client-side code.
Next, create client-render.js. This is going to be the only bit of JavaScript that’s exclusive to the client side. In it we need to pull in our routes and render them to the DOM.
import React from 'react';
import ReactDOM from 'react-dom';
import { Router } from 'react-router';
import { routes } from './routes';
import createBrowserHistory from 'history/lib/createBrowserHistory';
ReactDOM.render(
  <Router routes={routes} history={createBrowserHistory()} />,
document.getElementById('app')
)
The first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser’s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we’ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation.
Finally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also tell ReactDOM where to render, the #app element.
Generating build.js
We’re actually almost there! The final thing we need to do is generate our client side bundle. For this we’re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We’ll install it and babel-loader, a webpack plugin for transforming code through Babel.
npm install --save-dev webpack babel-loader
To run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code:
var path = require('path');
module.exports = {
entry: path.join(process.cwd(), 'client-render.js'),
output: {
path: './public/',
filename: 'build.js'
},
module: {
loaders: [
{
test: /\.js$/,
loader: 'babel'
}
]
}
}
Note first that this file can’t be written in ES6 as it doesn’t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location – if we just gave it the string ‘client-render.js’, webpack wouldn’t be able to find it.
Next, we tell webpack where to output our file, and here I’m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first.
Now we’re ready to generate the bundle!
./node_modules/.bin/webpack
This will take a fair few seconds to run (on my machine it’s about seven or eight), but once it has it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you’ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect!
The first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you’re developing.
Conclusions
First, if you’d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions.
Next, I want to stress that you shouldn’t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a site as simple as the one we built today justifies all this complexity, and you’d be right to ask. I used it because it’s an easy example to work with, but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it’s a suitable approach for you.
With that, all that’s left for me to do is wish you a very merry Christmas and best of luck with your React applications!",2015,Jack Franklin,jackfranklin,2015-12-05T00:00:00+00:00,https://24ways.org/2015/universal-react/,code
52,Git Rebasing: An Elfin Workshop Workflow,"This year Santa’s helpers have been tasked with making a garland. It’s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they’re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn’t. It’s worse; it’s about Git.)
For the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they’re able to work independently, but towards the common goal of making a beautiful garland.
At first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they’re going to need a system to change the beads out when mistakes are made in the chain.
The first common mistake is not looking to see what the latest chain is that’s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It’s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It’s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. They can request a new copy by running the command
git pull --rebase=preserve
(They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they’ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available.
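If typing that flag every time gets tedious, it can also be made the default behaviour for git pull (an optional tweak; the preserve value applies to Git versions of this era, and newer releases spell the setting rebase = merges):
git config pull.rebase preserve
From then on, a plain git pull behaves like git pull --rebase=preserve.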
The next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I’ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it’s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands:
git checkout master
git checkout -b 1234_pattern-name
1234 represents the work order number and pattern-name describes the pattern they’re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland.
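For a work order that reads red-blue-red, the loop might look something like this (the file name and commit messages are illustrative):
git add bead.txt
git commit --message ""Red bead""
git add bead.txt
git commit --message ""Blue bead""
git add bead.txt
git commit --message ""Red bead""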
To combine their work with the main garland, the elves need to make a few decisions. If they’re making a single strand, they issue the following commands:
git checkout master
git merge --ff-only 1234_pattern-name
To share their work they publish the new version of the main garland to the workshop spool with the command git push origin master.
Sometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order.
To update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command:
git reset --merge ORIG_HEAD
This works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf’s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally.
With these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf’s beads will be restrung on the tip of the main workshop garland.
Sometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf’s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they’re ready to close out their work order.
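Pulled together, the elf’s regular update routine looks something like this (the last two steps are only needed when beads collide):
git checkout master
git pull --rebase=preserve
git checkout 1234_pattern-name
git rebase master
git mergetool
git rebase --continue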
Once their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order:
git checkout master
git merge --ff-only 1234_pattern-name
git push origin master
Generally this process works well for the elves. Sometimes, though, when they’re tired or bored or a little drunk on festive cheer, they realise there’s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options.
Let’s pretend the elf has a sequence of five beads she’s been working on. The work order says the pattern should be red-blue-red-blue-red.
If the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5.
If she’s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5.
If one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence.
Using a tool that’s a bit like arthroscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available.
Start the garland surgery process with the command:
git rebase --interactive HEAD~4
A new screen comes up with the following information (the oldest bead is on top):
pick c2e4877 Red bead
pick 9b5555e Blue bead
pick 7afd66b Blue bead
pick e1f2537 Red bead
The elf adjusts the list, changing “pick” to “edit” next to the first blue bead:
pick c2e4877 Red bead
edit 9b5555e Blue bead
pick 7afd66b Blue bead
pick e1f2537 Red bead
She then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead.
She needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes.
Once she’s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message “Red bead – added” so she can easily find it.
The garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue.
The new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. She opens up a new program to help resolve the conflict by running git mergetool.
She knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads.
With the conflict resolved, the elf saves her changes and quits the mergetool.
Back at the command line, the elf checks the status of her work using the command git status.
rebase in progress; onto 4a9cb9d
You are currently rebasing branch '2_RBRBR' on '4a9cb9d'.
(all conflicts fixed: run ""git rebase --continue"")
Changes to be committed:
(use ""git reset HEAD ..."" to unstage)
modified: beads.txt
Untracked files:
(use ""git add ..."" to include in what will be committed)
beads.txt.orig
She removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands:
git add beads.txt
git commit --message ""Blue bead -- resolved conflict""
With the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files.
She incorporates the changes from the left and right column to ensure her bead sequence is correct.
Once the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland:
git add beads.txt
git commit --message ""Red bead -- resolved conflict""
and then runs the final verification steps in the rebase process (git rebase --continue).
The verification process runs through to the end, and the elf checks her work using the command git log --oneline.
9269914 Red bead -- resolved conflict
4916353 Blue bead -- resolved conflict
aef0d5c Red bead -- added
9b5555e Blue bead
c2e4877 Red bead
She knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct.
Sometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It’s made her more confident at restringing beads when she’s found real mistakes. And she doesn’t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don’t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked on.
Our lessons from the workshop:
By using rebase to update your branches, you avoid merge commits and keep a clean commit history.
If you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter --hard.
If you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch.
If you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. You will need to include how many commits back in time you want to review.",2015,Emma Jane Westby,emmajanewestby,2015-12-07T00:00:00+00:00,https://24ways.org/2015/git-rebasing/,code
54,Putting My Patterns through Their Paces,"Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.
The thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.
Meet the Teaser
Here’s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)
In its simplest form, a teaser contains a headline, which links to an article:
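<!-- a skeletal sketch; the heading level and link target here are illustrative -->
<div class=""teaser"">
  <h2><a href=""/url-of-article"">Headline</a></h2>
</div>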
Fairly straightforward, sure. But it’s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types – some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.
Nearly every element visible on this page is built out of our generic “teaser” pattern.
But the teaser variation I’d like to call out is the one that appears on The Toast’s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count are the most prominent elements within each teaser, appearing above the headline.
The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.
And this is, as it happens, the teaser variation that gave me pause. Back in the old days – you know, like six months ago – I probably would’ve marked this module up to match the design. In other words, I would’ve looked at the module’s visual hierarchy (metadata up top, headline and content below) and written the following HTML:
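<!-- a sketch of that source order, metadata first; the byline and body classes are illustrative -->
<div class=""teaser"">
  <p class=""teaser-byline"">By the author</p>
  <a class=""comment-count"" href=""#"">126 comments</a>
  <h2><a href=""#"">Headline</a></h2>
  <p class=""teaser-body"">Lorem ipsum dolor sit amet, consectetur…</p>
</div>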
But then I caught myself, and realized this wasn’t the best approach.
Moving Beyond Layout
Since I’ve started working responsively, there’s a question I work into every step of my design process. Whether I’m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:
What if someone doesn’t browse the web like I do?
…Okay, that doesn’t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it’s been invaluable to so many aspects of my practice. If I’m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I’m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It’s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.
And that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it’d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they’re associated.
That’s why I find it’s helpful to begin designing a pattern’s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I’m trying to support. In other words, if someone’s encountering my design without the CSS I’ve written, what should their experience be?
So I took a step back, and came up with a different approach:
<div class=""teaser"">
  <h2><a href=""#"">Headline</a></h2>
  <p class=""teaser-byline"">By the author</p>
  <p class=""teaser-body"">Lorem ipsum dolor sit amet, consectetur…
    <a class=""comment-count"" href=""#"">126 comments</a>
  </p>
</div>
Much, much better. This felt like a better match for the content I was designing: the headline – easily the most important element – was at the top, followed by the author’s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that’s why it’s at the very end of the excerpt, the last element within our teaser. And with some light styling, we’ve got a respectable-looking hierarchy in place.
Yeah, you’re right – it’s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we’ll bolster the markup with an extra element around our title and byline:
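Concretely, that’s a wrapper around the headline and byline — the .teaser-hed class here is the one the CSS below targets; the rest of the markup is my earlier sketch:
<div class="teaser">
  <div class="teaser-hed">
    <h2><a href="#">Article Title</a></h2>
    <p class="teaser-byline">by <a href="#">Author Name</a></p>
  </div>
  <!-- excerpt and comment count as before -->
</div>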
With that in place, we can use flexbox to tweak our layout, like so:
.teaser-hed {
display: flex;
flex-direction: column-reverse;
}
flex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.
Getting closer! But as great as flexbox is, it doesn’t do anything for elements outside our container, like our little comment count, which is, as you’ve probably noticed, still stranded at the very bottom of our teaser.
Flexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser’s using a sufficiently modern version of flexbox. Here’s the one we used:
var doc = document.body || document.documentElement;
var style = doc.style;
if ( style.webkitFlexWrap == '' ||
style.msFlexWrap == '' ||
style.flexWrap == '' ) {
doc.className += " supports-flex";
}
Eagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE, we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to the document. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser’s design:
.supports-flex .teaser-hed {
display: flex;
flex-direction: column-reverse;
}
.supports-flex .teaser .comment-count {
position: absolute;
right: 0;
top: 1.1em;
}
If the supports-flex class is present, we can apply our flexbox layout to the title area, sure – but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don’t meet our threshold for our advanced styles are left with an attractive design that matches our HTML’s content hierarchy; but the ones that pass our test receive the finished, final design.
And with that, our teaser’s complete.
Diving Into Device-Agnostic Design
This is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I’d recommend Heydon Pickering’s “Flexbox Grid Finesse”, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you’d be right! In fact, it’s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web. And what’s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.
Open video
Even a simple search form can be conditionally enhanced, given a little layered thinking.
This more layered approach to interface design isn’t a new one, mind you: it’s been championed by everyone from Filament Group to the BBC. And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I’ve found to practice responsive design. As Trent Walton once wrote,
Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.
We have a weird job, working on the web. We’re designing for the latest mobile devices, sure, but we’re increasingly aware that our definition of “smartphone” is much too narrow. Browsers have started appearing on our wrists and in our cars’ dashboards, but much of the world’s mobile data flows over sub-3G networks. After all, the web’s evolution has never been charted along a straight line: it’s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don’t yet know about, a more device-agnostic, more layered design process can better prepare our patterns – and ourselves – for the future.
(It won’t help you get enough to eat at holiday parties, though.)",2015,Ethan Marcotte,ethanmarcotte,2015-12-10T00:00:00+00:00,https://24ways.org/2015/putting-my-patterns-through-their-paces/,code
55,How Tabs Should Work,"Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They’ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented.
So this post is my definition of how a tabbing system should work, and one approach of implementing that.
But… tabs are easy, right?
I’ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system:
var tabs = $('.tab').click(function () {
tabs.hide().filter(this.hash).show();
}).map(function () {
return $(this.hash)[0];
});
$('.tab:first').click();
Simple, right? Nearly fits in a tweet (ignoring the whole jQuery library…). Still, it’s riddled with problems that make it a far from perfect solution.
Requirements: what makes the perfect tab?
All content is navigable and available without JavaScript (crawler-compatible and low JS-compatible).
ARIA roles.
The tabs are anchor links that:
are clickable
have block layout
have their href pointing to the id of the panel element
use the correct cursor (i.e. cursor: pointer).
Since tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open.
Right-clicking (and Shift-clicking) doesn’t cause the tab to be selected.
Native browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place).
The first three points are all to do with the semantics of the markup and how the markup has been styled. I think it’s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work.
The last three points are JavaScript problems. Let’s investigate that.
The shitmus test
Like a litmus test, here’s a couple of quick ways you can tell if a tabbing system is poorly implemented:
Change tab, then use the Back button (or keyboard shortcut) and it breaks
The tab isn’t a link, so you can’t open it in a new tab
These two basic things are, to me, the bare minimum that a tabbing system should have.
Why is this important?
The people who push their so-called native apps on users don’t need any more reasons to claim the web sucks. If something as basic as a tab doesn’t work, obviously there’s more ammo to push a closed native app or platform on your users.
If you’re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn’t mean don’t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. :breath:
URI fragment, absolute URL or query string?
A URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel. A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content.
This decision really depends on the context of your tabbing system. For something like GitHub’s tabs to view a pull request, it makes sense that the full URL changes.
For our problem though, I want to solve the issue when the page doesn’t do a full URL update; that is, your regular run-of-the-mill tabbing system.
I used to be from the school of using the hash to show the correct tab, but I’ve recently been exploring whether the query string can be used. The biggest reason is that multiple hashes don’t work, and comma-separated hash fragments don’t make any sense to control multiple tabs (since it doesn’t actually link to anything).
For this article, I’ll keep focused on using a single tabbing system and a hash on the URL to control the tabs.
Markup
I’m going to assume subcontent, so my markup would look like this (yes, this is a cat demo…):
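As a sketch, it might look like this — the dizzy id comes from the initialisation code further down; the other cats are stand-ins:
<ul>
  <li><a class="tab" href="#dizzy">Dizzy</a></li>
  <li><a class="tab" href="#ninja">Ninja</a></li>
  <li><a class="tab" href="#missy">Missy</a></li>
</ul>
<div class="panel" id="dizzy">…</div>
<div class="panel" id="ninja">…</div>
<div class="panel" id="missy">…</div>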
It’s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we’ll see once we’re on to writing the code.
URL-driven tabbing systems
Instead of making the code responsive to the user’s input, we’re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free.
With that in mind, let’s start building up our code. I’ll assume we have the jQuery library, but I’ve also provided the full code working without a library (vanilla, if you will), though it depends on relatively new (polyfillable) tech like classList and dataset (which are generally supported in IE10 and everything newer).
Note that I’ll start with the simplest solution, and I’ll refactor the code as I go along, like in places where I keep calling jQuery selectors.
function show(id) {
// remove the selected class from the tabs,
// and add it back to the one the user selected
$('.tab').removeClass('selected').filter(function () {
return (this.hash === id);
}).addClass('selected');
// now hide all the panels, then filter to
// the one we're interested in, and show it
$('.panel').hide().filter(id).show();
}
$(window).on('hashchange', function () {
show(location.hash);
});
// initialise by showing the first panel
show('#dizzy');
This works pretty well for such little code. Notice that we don’t have any click handlers for the user and the Back button works right out of the box.
However, there’s a number of problems we need to fix:
The initialised tab is hard-coded to the first panel, rather than what’s on the URL.
If there’s no hash on the URL, all the panels are hidden (and thus broken).
If you scroll to the bottom of the example, you’ll find a “top” link; clicking that will break our tabbing system.
I’ve purposely made the page long, so that when you click on a tab, you’ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying.
From our criteria at the start of this post, we’ve already solved items 4 and 5. Not a terrible start. Let’s solve items 1 through 3 next.
Using the URL to initialise correctly and protect from breakage
Instead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it’s available.
The problem is: what if the hash on the URL isn’t actually for a tab?
The solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won’t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. The result of $('.tab') should be cached.
So now the code will collect all the tabs, then find the related panels from those tabs, and we’ll use that list to double-check the values we give the show function (during initialisation, for instance).
// collect all the tabs
var tabs = $('.tab');
// get an array of the panel ids (from the anchor hash)
var targets = tabs.map(function () {
return this.hash;
}).get();
// use those ids to get a jQuery collection of panels
var panels = $(targets.join(','));
function show(id) {
// if no value was given, let's take the first panel
if (!id) {
id = targets[0];
}
// remove the selected class from the tabs,
// and add it back to the one the user selected
tabs.removeClass('selected').filter(function () {
return (this.hash === id);
}).addClass('selected');
// now hide all the panels, then filter to
// the one we're interested in, and show it
panels.hide().filter(id).show();
}
$(window).on('hashchange', function () {
var hash = location.hash;
if (targets.indexOf(hash) !== -1) {
show(hash);
}
});
// initialise
show(targets.indexOf(location.hash) !== -1 ? location.hash : '');
The core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab.
The second breakage we saw in the original demo was that clicking the “top” link would break our tabs. This was because the hashchange event fired and the code didn’t validate the hash that was passed. Now that validation happens, the panels don’t break.
So far we’ve got a tabbing system that:
Works without JavaScript.
Supports right-click and Shift-click (and doesn’t select in these cases).
Loads the correct panel if you start with a hash.
Supports native browser navigation.
Supports the keyboard.
The only annoying problem we have now is that the page jumps when a tab is selected. That’s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it’s all for a good cause.
Removing the jump to tab
You’d be forgiven for thinking you just need to hook a click handler and return false. It’s what I started with. Only that’s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support.
There may be another way to solve this, but what follows is the way I found – and it works. It’s just a bit… hairy, as I said.
We’re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won’t jump the page.
The change involves the following:
Add a click handler that removes the id from the target panel, and cache this in a target variable that we’ll use later in hashchange (see point 4).
In the same click handler, check whether the link’s hash matches the current location.hash. Clicking a link changes the URL as normal, but if the URL isn’t actually going to change, no hashchange event fires, so we trigger the update manually to stop the tabs breaking (try it yourself by removing this check).
For each panel, put a backup copy of the id attribute in a data property (I’ve called it old-id).
When the hashchange event fires, if we have a target value, let’s put the id back on the panel.
These changes result in this final code:
/*global $*/
// a temp value to cache *what* we're about to show
var target = null;
// collect all the tabs
var tabs = $('.tab').on('click', function () {
target = $(this.hash).removeAttr('id');
// if the URL isn't going to change, then hashchange
// event doesn't fire, so we trigger the update manually
if (location.hash === this.hash) {
// but this has to happen after the DOM update has
// completed, so we wrap it in a setTimeout 0
setTimeout(update, 0);
}
});
// get an array of the panel ids (from the anchor hash)
var targets = tabs.map(function () {
return this.hash;
}).get();
// use those ids to get a jQuery collection of panels
var panels = $(targets.join(',')).each(function () {
// keep a copy of what the original el.id was
$(this).data('old-id', this.id);
});
function update() {
if (target) {
target.attr('id', target.data('old-id'));
target = null;
}
var hash = window.location.hash;
if (targets.indexOf(hash) !== -1) {
show(hash);
}
}
function show(id) {
// if no value was given, let's take the first panel
if (!id) {
id = targets[0];
}
// remove the selected class from the tabs,
// and add it back to the one the user selected
tabs.removeClass('selected').filter(function () {
return (this.hash === id);
}).addClass('selected');
// now hide all the panels, then filter to
// the one we're interested in, and show it
panels.hide().filter(id).show();
}
$(window).on('hashchange', update);
// initialise
if (targets.indexOf(window.location.hash) !== -1) {
update();
} else {
show();
}
This version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add.
ARIA roles
This article on ARIA tabs made it very easy to get the tabbing system working as I wanted.
The tasks were simple:
Set the role attribute to tab for the tabs, and tabpanel for the panels.
Set aria-controls on the tabs to point to their related panel (by id).
I use JavaScript to add tabindex=0 to all the tab elements.
When I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false.
When I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false.
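Here’s a sketch of those tasks layered onto the final code above — I’m reusing the tabs, panels and show names from the earlier listing, and setting the ARIA states as strings:
// roles, relationships and keyboard access, applied once at setup
tabs.attr('role', 'tab').attr('tabindex', 0).each(function () {
  // point each tab at the panel it controls
  $(this).attr('aria-controls', this.hash.slice(1));
});
panels.attr('role', 'tabpanel');

// inside show(), alongside the class changes:
tabs.attr('aria-selected', 'false').filter(function () {
  return (this.hash === id);
}).attr('aria-selected', 'true');
panels.attr('aria-hidden', 'true').filter(id).attr('aria-hidden', 'false');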
And that’s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible.
Check out the final version (and the non-jQuery version as promised).
In conclusion
There are a lot of tab implementations out there, but an equal number break the browsing paradigm and the simple linkability of content. Clearly there’s a special hell for those tab systems that don’t even use links, but I think it’s clear that even in something that’s relatively simple, it’s the small details that make or break the user experience.
Obviously there are corners I’ve not explored, like when there’s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that’s for another year!",2015,Remy Sharp,remysharp,2015-12-22T00:00:00+00:00,https://24ways.org/2015/how-tabs-should-work/,code
63,Be Fluid with Your Design Skills: Build Your Own Sites,"Just five years ago in 2010, when we were all busy trying to surprise and delight, learning CSS3 and trying to get whole websites onto one page, we had a poster on our studio wall. It was entitled ‘Designers Vs Developers’, an infographic that showed us the differences between the men(!) who created websites.
Designers wore skinny jeans and used Macs and developers wore cargo pants and brought their own keyboards to work. We began to learn that designers and developers were not only doing completely different jobs but were completely different people in every way. This opinion was backed up by hundreds of memes, millions of tweets and pages of articles which used words like void and battle and versus.
Thankfully, things move quickly in this industry; the wide world of web design has moved on in the last five years. There are new devices, technologies, tools – and even a few women. Designers have been helped along by great apps, software, open source projects, conferences, and a community of people who, to my unending pride, love to share their knowledge and their work.
So the world has moved on, and if Miley Cyrus, Ruby Rose and Eliot Sumner are identifying as gender fluid (an identity which refers to a gender which varies over time or is a combination of identities), then I would like to come out as discipline fluid!
OK, I will probably never identify as a developer, but I will identify as fluid! How can we be anything else in an industry that moves so quickly? That’s how we should think of our skills, our interests and even our job titles. After all, Steve Jobs told us that “Design is not just what it looks like and feels like. Design is how it works.” Sorry skinny-jean-wearing designers – this means we’re all designing something together. And it’s not just about knowing the right words to use: you have to know how it feels. How it feels when you make something work, when you fix that bug, when you make it work on IE.
Like anything in life, things run smoothly when you make the effort to share experiences, empathise and deeply understand the needs of others. How can designers do that if they’ve never built their own site? I’m not talking the big stuff, I’m talking about your portfolio site, your mate’s business website, a website for that great idea you’ve had. I’m talking about doing it yourself to get an unique insight into how it feels.
We all know that designers and developers alike love an ordered list, so here it is.
Ten reasons designers should be fluid with their skills and build their own sites
1. It’s never been easier
Now here’s where the definition of ‘build’ is going to get a bit loose and people are going to get angry, but when I say it’s never been easier I mean because of the existence of apps and software like WordPress, Squarespace, Tumblr, et al. It’s easy to make something and get it out there into the world, and these are all gateway drugs to hard coding!
2. You’ll understand how it feels
How it feels to be so proud that something actually works that you momentarily don’t notice if the kerning is off or the padding is inconsistent. How it feels to see your site appear when you’ve redirected a URL. How it feels when you just can’t work out where that one extra space is in a line of PHP that has killed your whole site.
3. It makes you a designer
Not a better designer, it makes you a designer when you are designing how things look and how they work.
4. You learn about movement
Photoshop and Sketch just don’t cut it yet. Until you see your site in a browser or your app on a phone, it’s hard to imagine how it moves. Building your own sites shows you that it’s not just about how the content looks on the screen, but how it moves, interacts and feels.
5. You make techie friends
All the tutorials and forums in the world can’t beat your network of techie friends. Since I started working in web design I have worked with, sat next to, and co-created with some of the greatest developers. Developers who’ve shared their knowledge, encouraged me to build things, patiently explained HTML, CSS, servers, divs, web fonts, iOS development. There has been no void, no versus, very few battles; just people who share an interest and love of making things.
6. You will own domain names
When something is paid for, online and searchable then it’s real and you’ve got to put the work in. Buying domains has taught me how to stop procrastinating, but also about DNS, FTP, email, and how servers work.
7. People will ask you to do things
Learning about code and development opens a whole new world of design. When you put your own personal websites and projects out there people ask you to do more things. OK, so sometimes those things are “Make me a website for free”, but more often it’s cool things like “Come and speak at my conference”, “Write an article for my magazine” and “Collaborate with me.”
8. The young people are coming!
They love typography, they love print, they love layout, but they’ve known how to put a website together since they started their first blog aged five and they show me clever apps they’ve knocked together over the weekend! They’re new, they’re fluid, and they’re better than us!
9. Your portfolio is your portfolio
OK, it’s an obvious one, but as designers our work is our CV, our legacy! We need to show our skill, our attention to detail and our creativity in the way we showcase our work. Building your portfolio is the best way to start building your own websites. (And please be that designer who’s bothered to work out how to change the Squarespace favicon!)
10. It keeps you fluid!
Building your own websites is tough. You’ll never be happy with it, you’ll constantly be updating it to keep up with technology and fashion, and by the time you’ve finished it you’ll want to start all over again. Perfect for forcing you to stay up-to-date with what’s going on in the industry.
",2015,Ros Horner,roshorner,2015-12-12T00:00:00+00:00,https://24ways.org/2015/be-fluid-with-your-design-skills-build-your-own-sites/,code
64,Being Responsive to the Small Things,"It’s that time of the year again to trim the tree with decorations. Or maybe a DOM tree?
Any web page is made of HTML elements that lay themselves out in a tree structure. We start at the top and then have multiple branches with branches that branch out from there.
To decorate our tree, we use CSS to specify which branches should receive the tinsel we wish to adorn upon it. It’s all so lovely.
In years past, this was rather straightforward. But these days, our trees need to be versatile. They need to be responsive!
Responsive web design is pretty wonderful, isn’t it? Based on our viewport, we can decide how elements on the page should change their appearance to accommodate various constraints using media queries.
Clearleft have a delightfully clean and responsive site
Alas, it’s not all sunshine, lollipops, and rainbows.
With complex layouts, we may have design chunks — let’s call them components — that appear in different contexts. Each context may end up providing its own constraints on the design, both in its default state and in its possibly various responsive states.
Media queries, however, limit us to the context of the entire viewport, not individual containers on the page. For every container our component lives in, we need to specify how to rearrange things in that context. The more complex the system, the more contexts we need to write code for.
@media (min-width: 800px) {
.features > .component { }
.sidebar > .component {}
.grid > .component {}
}
Each new component and each new breakpoint just makes the entire system that much more difficult to maintain.
@media (min-width: 600px) {
.features > .component { }
.grid > .component {}
}
@media (min-width: 800px) {
.features > .component { }
.sidebar > .component {}
.grid > .component {}
}
@media (min-width: 1024px) {
.features > .component { }
}
Enter container queries
Container queries, also known as element queries, allow you to specify conditional CSS based on the width (or maybe height) of the container that an element lives in. In doing so, you no longer have to consider the entire page and the interplay of all the elements within.
With container queries, you’ll be able to consider the breakpoints of just the component you’re designing. As a result, you end up specifying less code and the components you develop have fewer dependencies on the things around them. (I guess that makes your components more independent.)
Awesome, right?
There’s only one catch.
Browsers can’t do container queries. There’s not even an official specification for them yet. The Responsive Issues (née Images) Community Group is looking into solving how such a thing would actually work.
See, container queries are tricky from an implementation perspective. The contents of a container can affect the size of the container. Because of this, you end up with troublesome circular references.
For example, if the width of the container is under 500px then the width of the child element should be 600px, and if the width of the container is over 500px then the width of the child element should be 400px.
Can you see the dilemma? When the container is under 500px, the child element resizes to 600px and suddenly the container is 600px. If the container is 600px, then the child element is 400px! And so on, forever. This is bad.
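Written in a made-up container query syntax (remember, no such syntax exists — that’s the point), the deadlock looks like this:
/* hypothetical syntax, for illustration only */
.child:container(max-width: 500px) {
  width: 600px; /* …which makes the container 600px wide… */
}
.child:container(min-width: 500px) {
  width: 400px; /* …which shrinks it again, and so on forever */
}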
I guess we should all just go home and sulk about how we just got a pile of socks when we really wanted the Millennium Falcon.
Our saviour this Christmas: JavaScript
The three wise men — Tim Berners-Lee, Håkon Wium Lie, and Brendan Eich — brought us the gifts of HTML, CSS, and JavaScript.
To date, there are a handful of open source solutions to fill the gap until a browser implementation sees the light of day.
Elementary by Scott Jehl
ElementQuery by Tyson Matanich
EQ.js by Sam Richards
CSS Element Queries from Marcj
Using any of these can sometimes feel like your toy broke within ten minutes of unwrapping it.
Each takes its own approach to specifying the query conditions. For example, Elementary, the smallest of the group, only supports min-width declarations made in a :before selector.
.mod-foo:before {
content: "300 410 500";
}
The script loops through all the elements that you specify, reading the content property and then setting an attribute value on the HTML element, allowing you to use CSS to style that condition.
.mod-foo[data-minwidth~="300"] {
background: blue;
}
To get the script to run, you’ll need to set up event handlers for when the page loads and for when it resizes.
window.addEventListener( ""load"", window.elementary, false );
window.addEventListener( ""resize"", window.elementary, false );
This works okay for static sites but breaks down on pages where elements can expand or contract, or where new content is dynamically inserted.
In the case of EQ.js, the implementation requires the creation of the breakpoints in the HTML. That means that you have implementation details in HTML, JavaScript, and CSS. (Although, with the JavaScript, once it’s in the build system, it shouldn’t ever be much of a concern unless you’re tracking down a bug.)
Another problem you may run into is the use of content delivery networks (CDNs) or cross-origin security issues. The ElementQuery and CSS Element Queries libraries need to be able to read the CSS file. If you are unable to set up proper cross-origin resource sharing (CORS) headers, these libraries won’t help.
At Shopify, for example, we had all of these problems. The admin that store owners use is very dynamic and the CSS and JavaScript were being loaded from a CDN that prevented the JavaScript from reading the CSS.
To go responsive, the team built their own solution — one similar to the other scripts above, in that it loops through elements and adds or removes classes (instead of data attributes) based on minimum or maximum width.
The caveat to this particular approach is that the declaration of breakpoints had to be done in JavaScript.
var elements = [
  { module: '.carousel', className: 'alpha', minWidth: 768, maxWidth: 1024 },
  { module: '.button', className: 'beta', minWidth: 768, maxWidth: 1024 },
  { module: '.grid', className: 'cappa', minWidth: 768, maxWidth: 1024 }
];
With that done, the script then had to be set to run during various events such as inserting new content via Ajax calls. This sometimes reveals itself in flashes of unstyled breakpoints (FOUB). An unfortunate side effect but one largely imperceptible.
Using this approach, however, allowed the Shopify team to make the admin responsive really quickly. Each member of the team was able to tackle the responsive story for a particular component without much concern for how all the other components would react.
Each element responds to its own breakpoints, which would amount to dozens of breakpoints using traditional media queries. This approach allows for a truly fluid and adaptive interface for all screens.
Christmas is over
I wish I were the bearer of greater tidings and cheer. It’s not all bad, though. We may one day see browsers implement container queries natively. At which point, we shall all rejoice!",2015,Jonathan Snook,jonathansnook,2015-12-19T00:00:00+00:00,https://24ways.org/2015/being-responsive-to-the-small-things/,code
65,The Accessibility Mindset,"Accessibility is often characterized as additional work, hard to learn and only affecting a small number of people. Those myths have no logical foundation and often stem from outdated information or misconceptions.
Indeed, it is an additional skill set to acquire, quite like learning new JavaScript frameworks, CSS layout techniques or new HTML elements. But it isn’t particularly harder to learn than those other skills.
A World Health Organization (WHO) report on disabilities states that,
[i]ncluding children, over a billion people (or about 15% of the world’s population) were estimated to be living with disability.
Being disabled is not as unusual as one might think. Due to chronic health conditions and older people having a higher risk of disability, we are also currently paving the cowpath to an internet that we can still use in the future.
Accessibility has a very close relationship with usability, and advancements in accessibility often yield improvements in the usability of a website. Websites are also more adaptable to users’ needs when they are built in an accessible fashion.
Beyond the bare minimum
In the time of table layouts, web developers could create code that passed validation rules but didn’t adhere to the underlying semantic HTML model. We later developed best practices, like using lists for navigation, and with HTML5 we started to wrap those lists in nav elements. Working with accessibility standards is similar. The Web Content Accessibility Guidelines (WCAG) 2.0 can inform your decision to make websites accessible and can be used to test that you met the success criteria. What it can’t do is measure how well you met them.
W3C developed a long list of techniques that can be used to make your website accessible, but you might find yourself in a situation where you need to adapt those techniques to be the most usable solution for your particular problem.
The checkbox below is implemented in an accessible way: The input element has an id and the label associated with the checkbox refers to the input using the for attribute. The hover area is shown with a yellow background and a black dotted border:
Open video
The label is clickable and the checkbox has an accessible description. Job done, right? Not really. Take a look at the space between the label and the checkbox:
Open video
The gutter is created using a right margin which pushes the label to the right. Users would certainly expect this space to be clickable as well. The simple solution is to wrap the label around the checkbox and the text:
Open video
You can also set the label to display:block; to further increase the clickable area:
Open video
And while we’re at it, users might expect the whole box to be clickable anyway. Let’s apply the CSS that was on a wrapping div element to the label directly:
Open video
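Pulling those steps together, a minimal sketch of the final pattern — the id, label text and style values here are assumptions:
<label class="box" for="newsletter">
  <input type="checkbox" id="newsletter"> Subscribe to the newsletter
</label>

.box {
  display: block; /* the whole row becomes clickable */
  padding: 1em;   /* the styling that previously sat on a wrapping div */
  border: 1px dotted #000;
}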
The result enhances the usability of your form element tremendously for people with lower dexterity, using a voice mouse, or using touch interfaces. And we only used basic HTML and CSS techniques; no JavaScript was added and not one extra line of CSS.
Button Example
The button below looks like a typical edit button: a pencil icon on a real button element. But if you are using a screen reader or a braille keyboard, the button is just read as “button” without any indication of what this button is for.
Open video
A screen reader announcing a button. Contains audio.
The code snippet shows why the button is not properly announced:
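The markup in question would be along these lines (a reconstruction — the icon class names are assumptions):
<button><span class="icon icon-pencil"></span></button>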
An icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users:
Open video
A screen reader announcing a button with a title.
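Applied to the sketch above, that could be:
<button aria-label="Edit"><span class="icon icon-pencil"></span></button>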
However, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn’t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts:
The mobile GitHub view with and without external fonts.
Even if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn’t load.
Providing always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons.
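A sketch of the visible-text version — the aria-hidden attribute, an addition of mine rather than part of the original example, keeps the decorative icon out of the accessible name:
<button><span class="icon icon-pencil" aria-hidden="true"></span> Edit</button>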
This also reads just fine in screen readers:
Open video
A screen reader announcing the revised button.
Clever usability enhancements don’t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the “captioned videos” or “audio description” categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don’t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study.
More information
W3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out “How People with Disabilities Use the Web”, there are “Tips for Getting Started” for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way.
Conclusion
You can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don’t notice when those fundamental aspects of good website design and development are done right. But they’ll always know when they are implemented poorly.
If you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn’t know they’d need it:
Open video
In this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks “What did she say?” the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.",2015,Eric Eggert,ericeggert,2015-12-17T00:00:00+00:00,https://24ways.org/2015/the-accessibility-mindset/,code
68,"Grid, Flexbox, Box Alignment: Our New System for Layout","Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. However, in 2016 we should be seeing it in a new improved form ready for our use in browsers.
Grid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work.
Another new layout method has emerged over the past few years in a more public and perhaps more painful way. Shipped prefixed in browsers, the flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported.
Owing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks.
If there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years time we’ll be easily able to date a website to circa 2015 by seeing class names like row and col-md-6 scattered through the markup.
In this article, I’m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them.
To see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here.
Relationship
Items only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It’s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout.
At a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox:
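The crux is a single declaration on the parent — a minimal sketch, with an assumed class name:
.columns {
  display: flex; /* the children become flex items, and stretch to equal height by default */
}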
See the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen.
And in grid layout (requires a CSS grid-supporting browser):
See the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen.
Alignment
Full-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you’d like to have under your tree this Christmas, then this is the box you’ll want to unwrap first.
The box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods. That includes grid layout, but also other layout methods. Once implemented in browsers, this specification will give us true vertical centring of all the things.
Our examples above achieved full-height columns because the default value of align-items is stretch. The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property.
The examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. The other three images are aligned using different values of align-self.
Take a look at an example in flexbox:
See the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen.
And also in grid layout (requires a CSS grid-supporting browser):
See the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen.
The alignment properties used with CSS grid layout.
Fluid grids
A cornerstone of responsive design is the concept of fluid grids.
“[…]every aspect of the grid—and the elements laid upon it—can be expressed as a proportion relative to its container.”
—Ethan Marcotte, “Fluid Grids”
The method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element.
h1 {
margin-left: 14.575%; /* 144px / 988px = 0.14575 */
width: 70.85%; /* 700px / 988px = 0.7085 */
}
In more recent years, we’ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple.
The most basic of flexbox demos shows this fluidity in action. The justify-content property – another property defined in the box alignment module – can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion.
In this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between.
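Based on that description, the relevant CSS is roughly:
ul {
  display: flex;
  justify-content: space-between; /* remaining space is distributed between the items */
}

li {
  max-width: 250px;
}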
See the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen.
For true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion.
The flexbox flex property is a shorthand for:
flex-grow
flex-shrink
flex-basis
The flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value.
flex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis.
flex: 0 0 200px: a box that will be 200px and cannot grow or shrink.
flex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller.
In this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis.
See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.
What I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out.
If I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. You can see what happens in this demo:
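In CSS, that change amounts to something like this (the class names are assumed):
.box {
  flex: 3 1 100px; /* grow: three shares of the leftover space */
}

.box-portrait {
  flex: 1 1 100px; /* grow: one share */
}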
See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.
Once you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes.
In this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr.
grid-template-columns: 1fr 2fr 2fr 2fr;
See the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen.
The four-track grid.
Separation of concerns
My younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup.
Browsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another.
Here is a snippet from the Bootstrap grid examples – two columns with two nested columns:
<div class="row">
  <div class="col-md-8">
    <div class="row">
      <div class="col-md-6"></div>
      <div class="col-md-6"></div>
    </div>
  </div>
  <div class="col-md-4"></div>
</div>
Not a million miles away from something I might have written in 1999.