{"rowid": 49, "title": "Universal React", "contents": "One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications.\nMore generally we\u2019ve seen an even greater rise in the number of applications built primarily on the client side with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google\u2019s search bots). Additionally, we gain a performance improvement by being able to render from the server rather than having to wait for all the JavaScript to be loaded and executed.\nThe good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. The way in which these apps work is as follows:\n\nThe user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page.\nIn the background, the client-side JavaScript is executed and takes over the duty of rendering the page.\nThe next time a user clicks, rather than being sent to the server, the client-side app is in control.\nIf the user doesn\u2019t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again.\n\nThis means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that\u2019s clever enough handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem.\nIt\u2019s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you\u2019d like a refresher, the ReactJS docs are a good place to start.\n\u00a0Getting started\nWe\u2019re going to create a tiny ReactJS application that will work on the server and the client. First we\u2019ll need to create a new project and install some dependencies. In a new, blank directory, run:\nnpm init -y\nnpm install --save ejs express react react-router react-dom\nThat will create a new project and install our dependencies:\n\nejs is a templating engine that we\u2019ll use to render our HTML on the server.\nexpress is a small web framework we\u2019ll run our server on.\nreact-router is a popular routing solution for React so our app can fully support and respect URLs.\nreact-dom is a small React library used for rendering React components.\n\nWe\u2019re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too.\nnpm install --save-dev babel-cli babel-preset-es2015 babel-preset-react\nThen, create a .babelrc file that contains the following:\n{\n \"presets\": [\"es2015\", \"react\"]\n}\nWhat we\u2019ve done here is install Babel\u2019s command line interface (CLI) tool and configured it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We\u2019ll need the React transforms when we start writing JSX when working with React.\nCreating a server\nFor now, our ExpressJS server is pretty straightforward. 
All we\u2019ll do is render a view that says \u2018Hello World\u2019. Here\u2019s our server code:\nimport express from 'express';\nimport http from 'http';\n\nconst app = express();\n\napp.use(express.static('public'));\n\napp.set('view engine', 'ejs');\n\napp.get('*', (req, res) => {\n res.render('index');\n});\n\nconst server = http.createServer(app);\n\nserver.listen(3003);\nserver.on('listening', () => {\n console.log('Listening on 3003');\n});\nHere we\u2019re using ES6 modules, which I wrote about on 24 ways last year, if you\u2019d like a reminder. We tell the app to render the index view on any GET request (that\u2019s what app.get('*') means, the wildcard matches any route).\nWe now need to create the index view file, which Express expects to be defined in views/index.ejs:\n\n\n \n My App\n \n\n \n Hello World\n \n\nFinally, we\u2019re ready to run the server. Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command:\n./node_modules/.bin/babel-node server.js\nAnd you should now be able to visit http://localhost:3003 and see \u2018Hello World\u2019 right there:\n\nBuilding the React app\nNow we\u2019ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs.\nFirstly, let\u2019s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view. We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so:\n\n <%- markup %>\n\nNext, we\u2019ll define the routes we want our app to have using React Router. For now we\u2019ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it\u2019s clearer to define them as an object. Here\u2019s what we\u2019re starting with:\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\nThese are just placed at the top of server.js, after the import statements. Later we\u2019ll move these into a separate file, but for now they are fine where they are.\nNotice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let\u2019s quickly define components/app.js and components/index.js. app.js looks like so:\nimport React from 'react';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n
<div>\n <h2>Welcome to my App</h2>\n { this.props.children }\n </div>
\n );\n }\n}\nWhen a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland:\nimport React from 'react';\n\nexport default class IndexComponent extends React.Component {\n render() {\n return (\n
<div>\n <p>This is the index page</p>\n </div>
\n );\n }\n}\nServer-side routing with React Router\nHead back into server.js, and firstly we\u2019ll need to add some new imports:\nimport React from 'react';\nimport { renderToString } from 'react-dom/server';\nimport { match, RoutingContext } from 'react-router';\n\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nThe ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It\u2019s this method that we\u2019ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we\u2019ll need to render. This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don\u2019t need to concern yourself about how this component works, so don\u2019t worry too much.\nNow for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes:\napp.get('*', (req, res) => {\n // routes is our object of React routes defined above\n match({ routes, location: req.url }, (err, redirectLocation, props) => {\n if (err) {\n // something went badly wrong, so 500 with a message\n res.status(500).send(err.message);\n } else if (redirectLocation) {\n // we matched a ReactRouter redirect, so redirect from the server\n res.redirect(302, redirectLocation.pathname + redirectLocation.search);\n } else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString();\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n\n } else {\n // no route match, so 404. In a real app you might render a custom\n // 404 view here\n res.sendStatus(404);\n }\n });\n});\nWe call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occuring or a redirect (React Router has built in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we\u2019ve made it this far it means we found a matching component to render and we can use this code to render it:\n...\n} else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString();\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n} else {\n ...\n}\nThe renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components.\nNote the {...props}, which is a neat bit of JSX syntax that spreads out our object into key value properties. 
To see this better, note the two pieces of JSX code below, both of which are equivalent:\n\n\n// OR:\n\nconst props = { a: \"foo\", b: \"bar\" };\n\nRunning the server again\nI know that felt like a lot of work, but the good news is that once you\u2019ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. To check, restart the server and head to http://localhost:3003 once more. You should see it all working!\n\nRefactoring and one more route\nBefore we move on to getting this code running on the client, let\u2019s add one more route and do some tidying up. First, move our routes object out into routes.js:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\n\nexport { routes };\nAnd then update server.js. You can remove the two component imports and replace them with:\nimport { routes } from './routes';\nFinally, let\u2019s add one more route for ./about and links between them. Create components/about.js:\nimport React from 'react';\n\nexport default class AboutComponent extends React.Component {\n render() {\n return (\n
<div>\n <p>A little bit about me.</p>\n </div>
\n );\n }\n}\nAnd then you can add it to routes.js too:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nimport AboutComponent from './components/about';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n },\n {\n path: '/about',\n component: AboutComponent\n }\n ]\n}\n\nexport { routes };\nIf you now restart the server and head to http://localhost:3003/about` you\u2019ll see the about page!\n\nFor the finishing touch we\u2019ll use the React Router link component to add some links between the pages. Edit components/app.js to look like so:\nimport React from 'react';\nimport { Link } from 'react-router';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n
<div>\n <h2>Welcome to my App</h2>\n <ul>\n <li><Link to=\"/\">Home</Link></li>\n <li><Link to=\"/about\">About</Link></li>\n </ul>\n { this.props.children }\n </div>
\n );\n }\n}\nYou can now click between the pages to navigate. However, everytime we do so the requests hit the server. Now we\u2019re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience.\nClient-side rendering\nFirst, we\u2019re going to make a small change to views/index.ejs. React doesn\u2019t like rendering directly into the body and will give a warning when you do so. To prevent this we\u2019ll wrap our app in a div:\n\n
<div id=\"app\"><%- markup %></div>\n <script src=\"build.js\"></script>
\n \n\nI\u2019ve also added in a script tag to build.js, which is the file we\u2019ll generate containing all our client-side code.\nNext, create client-render.js. This is going to be the only bit of JavaScript that\u2019s exclusive to the client side. In it we need to pull in our routes and render them to the DOM.\nimport React from 'react';\nimport ReactDOM from 'react-dom';\nimport { Router } from 'react-router';\n\nimport { routes } from './routes';\n\nimport createBrowserHistory from 'history/lib/createBrowserHistory';\n\nReactDOM.render(\n ,\n document.getElementById('app')\n)\nThe first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser\u2019s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we\u2019ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation.\nFinally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also tell ReactDOM where to render, the #app element.\nGenerating build.js\nWe\u2019re actually almost there! The final thing we need to do is generate our client side bundle. For this we\u2019re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We\u2019ll install it and babel-loader, a webpack plugin for transforming code through Babel.\nnpm install --save-dev webpack babel-loader\nTo run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code:\nvar path = require('path');\nmodule.exports = {\n entry: path.join(process.cwd(), 'client-render.js'),\n output: {\n path: './public/',\n filename: 'build.js'\n },\n module: {\n loaders: [\n {\n test: /.js$/,\n loader: 'babel'\n }\n ]\n }\n}\nNote first that this file can\u2019t be written in ES6 as it doesn\u2019t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location \u2013 if we just gave it the string \u2018client-render.js\u2019, webpack wouldn\u2019t be able to find it.\nNext, we tell webpack where to output our file, and here I\u2019m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first.\nNow we\u2019re ready to generate the bundle!\n./node_modules/.bin/webpack\nThis will take a fair few seconds to run (on my machine it\u2019s about seven or eight), but once it has it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you\u2019ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect!\nThe first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. 
The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you\u2019re developing.\nConclusions\nFirst, if you\u2019d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions.\nNext, I want to stress that you shouldn\u2019t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a static site like the one we built today is worth its complexity, and you\u2019d be right. I used it as it\u2019s an easy example to work with but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it\u2019s a suitable infrastructure for you.\nWith that, all that\u2019s left for me to do is wish you a very merry Christmas and best of luck with your React applications!", "year": "2015", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2015-12-05T00:00:00+00:00", "url": "https://24ways.org/2015/universal-react/", "topic": "code"} {"rowid": 50, "title": "Make a Comic", "contents": "For something slightly different over Christmas, why not step away from your computer and make a comic? \nDefinitely not the author working on a comic in the studio, with the desk displaying some of the things you need to make a comic on paper.\nWhy make a comic?\nFirst of all, it\u2019s truly fun and it\u2019s not that difficult. If you\u2019re a designer, you can use skills you already have, so why not take some time to indulge your aesthetic whims and make something for yourself, rather than for a client or your company. And you can use a computer \u2013 or not.\nIf you\u2019re an interaction designer, it\u2019s likely you\u2019ve already made a storyboard or flow, or designed some characters for personas. This is a wee jump away from that, to the realm of storytelling and navigating human emotions through characters who may or may not be human. Similar medium and skills, different content. \nIt\u2019s not a client deliverable but something that stands by itself, and you\u2019ve nobody\u2019s criteria to meet except those that exist in your imagination! \nThanks to your brain and the alchemy of comics, you can put nearly anything in a sequence and your brain will find a way to make sense of it. Scott McCloud wrote about the non sequitur in comics: \n\n\u201cThere is a kind of alchemy at work in the space between panels which can help us find meaning or resonance in even the most jarring of combinations.\u201d \n\nHere\u2019s an example of a non sequitur from Scott McCloud\u2019s Understanding Comics \u2013 the images bear no relation to one another, but since they\u2019re in a sequence our brains do their best to understand it: \n\nOnce you know this it takes the pressure off somewhat. It\u2019s a fun thing to keep in mind and experiment with in your comics! \nMaterials needed\n\nA4 copy/printing paper \nHB pencil for light drawing\nDip pen and waterproof Indian ink \nBristol board (or any good quality card with a smooth, durable surface) \n\nStep 1: Get ideas\nYou\u2019d be surprised where you can take a small grain of an idea and develop it into an interesting comic. Think about a funny conversation you had, or any irrational fears, habits, dreams or anything else. Just start writing and drawing. 
Having ideas is hard, I know, but you will get some ideas when you start working. \nOne way to keep track of ideas is to keep a sketch diary, capturing funny conversations and other events you could use in comics later. \nYou might want to just sketch out the whole comic very roughly if that helps. I tend to sketch the story first, but it usually changes drastically during step 2.\nStep 2: Edit your story using thumbnails\nHow thumbnailing works.\nWhy use thumbnails? You can move them around or get rid of them! \nDrawings are harder and much slower to edit than words, so you need to draw something very quick and very rough. You don\u2019t have to care about drawing quality at this point. \nYou might already have a drafted comic from the previous step; now you can split each panel up into a thumbnail like the image above. \nGet an A4 sheet of printing paper and tear it up into squares. A thumbnail equals a comic panel. Start drawing one panel per thumbnail. This way you can move scenes and parts of the story around as you work on the pacing. It\u2019s an extremely useful tip if you want to expand a moment in time or draw out a dialogue, or if you want to just completely cut scenes. \nStep 3: Plan a layout\nSo you\u2019ve got the story more or less down: you now need to know how they\u2019ll look on the page. Sketch a layout and arrange the thumbnails into the layout.\nThe simplest way to do this is to divide an A4 page into equal panels \u2014 say, nine. But if you want, you can be more creative than that. The Gigantic Beard That Was Evil by Stephen Collins is an excellent example of the scope for using page layout creatively. You can really push the form: play with layout, scale, story and what you think of as a comic.\nStep 4: Draw the comic\nI recommend drawing on A4 Bristol board paper since it has a smooth surface, can tolerate a lot of rubbing out and holds ink well. You can get it from any art shop. \nUsing your thumbnails for reference, draw the comic lightly using an HB pencil. Don\u2019t make the line so heavy that it can\u2019t be erased (since you\u2019ll ink over the lines later).\nStep 5: Ink the comic\nImage before colour was added.\nYou\u2019ve drawn your story. Well done!\nNow for the fun part. I recommend using a dip pen and some waterproof ink. Why waterproof? If you want, you can add an ink wash later, or even paint it. \nIf you don\u2019t have a dip pen, you could also use any quality pen. Carefully go over your pencilled lines with the pen, working from top left to right and down, to avoid smudging it. It\u2019s unfortunately easy to smudge the ink from the dip pen, so I recommend practising first. \nYou\u2019ve made a comic! \nStep 6: Adding colour\nComics traditionally had a limited colour palette before computers (here\u2019s an in-depth explanation if you\u2019re curious). You can actually do a huge amount with a restricted colour palette. Ellice Weaver\u2019s comics show how very nicely how you can paint your work using a restricted palette. So for the next step, resist the temptation to add ALL THE COLOURS and consider using a limited palette. \nOnce the ink is completely dry, erase the pencilled lines and you\u2019ll be left with a beautiful inked black and white drawing. \nYou could use a computer for this part. You could also photocopy it and paint straight on the copy. If you\u2019re feeling really brave, you could paint straight on the original. But I\u2019d suggest not doing this if it\u2019s your first try at painting! 
\nWhat follows is an extremely basic guide for painting using Photoshop, but there are hundreds of brilliant articles out there and different techniques for digital painting. \nHow to paint your comic using Photoshop\n\nScan the drawing and open it in Photoshop. You can adjust the levels (Image \u2192 Adjustments \u2192 Levels) to make the lines darker and crisper, and the paper invisible. At this stage, you can erase any smudges or mistakes. With a Wacom tablet, you could even completely redraw parts! Computers are just amazing. Keep the line art as its own layer. \nAdd a new layer on top of the lines, and set the layer state from normal to multiply. This means you can paint your comic without obscuring your lines. Rename the layer something else, so you can keep track.\nStart blocking in colour. And once you\u2019re happy with that, experiment with adding tone and texture.\n\nChristmas comic challenge!\nWhy not challenge yourself to make a short comic over Christmas? If you make one, share it in the comments. Or show me on Twitter \u2014 I\u2019d love to see it.\n\nCredit: Many of these techniques were learned on the Royal Drawing School\u2019s brilliant \u2018Drawing the Graphic Novel\u2019 course.", "year": "2015", "author": "Rebecca Cottrell", "author_slug": "rebeccacottrell", "published": "2015-12-20T00:00:00+00:00", "url": "https://24ways.org/2015/make-a-comic/", "topic": "design"} {"rowid": 51, "title": "Blow Your Own Trumpet", "contents": "Even if your own trumpet\u2019s tiny and fell out of a Christmas cracker, blowing it isn\u2019t something that everyone\u2019s good at. Some people find selling themselves and what they do difficult. But, you know what? Boo hoo hoo. If you want people to buy something, the reality is you\u2019d better get good at selling, especially if that something is you.\nFor web professionals, the best place to tell potential business customers or possible employers about what you do is on your own website. You can write what you want and how you want, but that doesn\u2019t make knowing what to write any easier. As a matter of fact, writing for yourself often proves harder than writing for someone else.\nI spent this autumn thinking about what I wanted to say about Stuff & Nonsense on the website we relaunched recently. While I did that, I spoke to other designers about how they struggled to write about their businesses.\nIf you struggle to write well, don\u2019t worry. You\u2019re not on your own. Here are five ways to hit the right notes when writing about yourself and your work.\nBe genuine about who you are\nI\u2019ve known plenty of talented people who run a successful business pretty much single-handed. Somehow they still feel awkward presenting themselves as individuals. They wonder whether describing themselves as a company will give them extra credibility. They especially agonise over using \u201cwe\u201d rather than \u201cI\u201d when describing what they do. These choices get harder when you\u2019re a one-man band trading as a limited company or LLC business entity.\nIf you mainly work alone, don\u2019t describe yourself as anything other than \u201cI\u201d. You might think that saying \u201cwe\u201d makes you appear larger and will give you a better chance of landing bigger and better work, but the moment a prospective client asks, \u201cHow many people are you?\u201d you\u2019ll have some uncomfortable explaining to do. This will distract them from talking about your work and derail your sales process. 
There\u2019s no need to be anything other than genuine about how you describe yourself. You should be proud to say \u201cI\u201d because working alone isn\u2019t something that many people have the ability, business acumen or talent to do.\nExplain what you actually do\nHow many people do precisely the same job as you? Hundreds? Thousands? The same goes for companies. If yours is a design studio, development team or UX consultancy, there are countless others saying exactly what you\u2019re saying about what you do. Simply stating that you code, design or \u2013 God help me \u2013 \u201chandcraft digital experiences\u201d isn\u2019t enough to make your business sound different from everyone else. Anyone can and usually does say that, but people buy more than deliverables. They buy something that\u2019s unique about you and your business.\nPotentially thousands of companies deliver code and designs the same way as Stuff & Nonsense, but our clients don\u2019t just buy page designs, prototypes and websites from us. They buy our taste for typography, colour and layout, summed up by our \u201cIt\u2019s the taste\u201d tagline and bowler hat tip to the PG Tips chimps. We hope that potential clients will understand what\u2019s unique about us. Think beyond your deliverables to what people actually buy, and sell the uniqueness of that.\nDescribe work in progress\nIt\u2019s sad that current design trends have made it almost impossible to tell one website from another. So many designers now demonstrate finished responsive website designs by pasting them onto iMac, MacBook, iPad and iPhone screens that their portfolios don\u2019t fare much better. Every designer brings their own experience, perspective and process to a project. In my experience, it\u2019s understanding those differences which forms a big part of how a prospective client makes a decision about who to work with. Don\u2019t simply show a prospective client the end result of a previous project; explain your process, the development of your thinking and even the wrong turns you took.\nTraditional case studies, like the one I\u2019ve just written about Stuff & Nonsense\u2019s work for WWF UK, can take a lot of time. That\u2019s probably why many portfolios get out of date very quickly. Designers make new work all the time, so there must be a better way to show more of it more often, to give prospective clients a clearer understanding of what we do. At Stuff & Nonsense our solution was to create a feed where we could post fragments of design work throughout a project. This also meant rewriting our Contract Killer to give us permission to publish work before someone signs it off.\nOutline a client\u2019s experience\nRecently a client took me to one side and offered some valuable advice. She told me that our website hadn\u2019t described anything about the experience she\u2019d had while working with us. She said that knowing more about how we work would\u2019ve helped her make her buying decision.\nWhen a client chooses your business, they\u2019re hoping for more than a successful outcome. They want their project to run smoothly. They want to feel that they made a correct decision when they chose you. If they work for an organisation, they\u2019ll want their good judgement to be recognised too. Our client didn\u2019t recognise her experience because we hadn\u2019t made our own website part of it. 
Remember, the challenge of creating a memorable user experience starts with selling to the people paying you for it.\nAddress your ideal client\nIt\u2019s important to understand that a portfolio\u2019s job isn\u2019t to document your work, it\u2019s to attract new work from clients you want. Make sure that work you show reflects the work you want, because what you include in your portfolio often leads to more of the same.\nWhen you\u2019re writing for your portfolio and elsewhere on your website, imagine that you\u2019re addressing your ideal client. Picture them sitting opposite and answer the questions they\u2019d ask as you would in conversation. Be direct, funny if that\u2019s appropriate and serious when it\u2019s not. If it helps, ask a friend to read the questions aloud and record what you say in response. This will help make what you write sound natural. I\u2019ve found this technique helps clients write copy too.\nToot your own horn\nSome people confuse expressing confidence in yourself and your work as boastfulness, but in a competitive world the reality is that if you are to succeed, you need to show confidence so that others can show their confidence in you. If you want people to hear you, pick up your trumpet and blow it.", "year": "2015", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2015-12-23T00:00:00+00:00", "url": "https://24ways.org/2015/blow-your-own-trumpet/", "topic": "business"} {"rowid": 52, "title": "Git Rebasing: An Elfin Workshop Workflow", "contents": "This year Santa\u2019s helpers have been tasked with making a garland. It\u2019s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they\u2019re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn\u2019t. It\u2019s worse; it\u2019s about Git.)\nFor the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they\u2019re able to work independently, but towards the common goal of making a beautiful garland.\nAt first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they\u2019re going to need a system to change the beads out when mistakes are made in the chain.\nThe first common mistake is not looking to see what the latest chain is that\u2019s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It\u2019s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It\u2019s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. 
They can request a new copy by running the command\ngit pull --rebase=preserve\n(They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they\u2019ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available.\nThe next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I\u2019ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it\u2019s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands:\ngit checkout master\ngit checkout -b 1234_pattern-name\n1234 represents the work order number and pattern-name describes the pattern they\u2019re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland.\nTo combine their work with the main garland, the elves need to make a few decisions. If they\u2019re making a single strand, they issue the following commands:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\nTo share their work they publish the new version of the main garland to the workshop spool with the command git push origin master.\nSometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order. \nTo update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command:\ngit reset --merge ORIG_HEAD\nThis works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf\u2019s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally.\nWith these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf\u2019s beads will be restrung on the tip of the main workshop garland.\nSometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf\u2019s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. 
They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they\u2019re ready to close out their work order.\nOnce their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\ngit push origin master\nGenerally this process works well for the elves. Sometimes, though, when they\u2019re tired or bored or a little drunk on festive cheer, they realise there\u2019s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options.\nLet\u2019s pretend the elf has a sequence of five beads she\u2019s been working on. The work order says the pattern should be red-blue-red-blue-red.\n\nIf the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5.\n\nIf she\u2019s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5.\n\nIf one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence.\n\nUsing a tool that\u2019s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available.\nStart the garland surgery process with the command:\ngit rebase --interactive HEAD~4\nA new screen comes up with the following information (the oldest bead is on top):\npick c2e4877 Red bead\npick 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nThe elf adjusts the list, changing \u201cpick\u201d to \u201cedit\u201d next to the first blue bead:\npick c2e4877 Red bead\nedit 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nShe then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead.\n\nShe needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes.\nOnce she\u2019s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message \u201cRed bead \u2013 added\u201d so she can easily find it.\n\nThe garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue.\nThe new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. 
She opens up a new program to help resolve the conflict by running git mergetool.\n\nShe knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads.\n\nWith the conflict resolved, the elf saves her changes and quits the mergetool.\nBack at the command line, the elf checks the status of her work using the command git status.\nrebase in progress; onto 4a9cb9d\nYou are currently rebasing branch '2_RBRBR' on '4a9cb9d'.\n (all conflicts fixed: run \"git rebase --continue\")\n\nChanges to be committed:\n (use \"git reset HEAD ...\" to unstage)\n\n modified: beads.txt\n\nUntracked files:\n (use \"git add ...\" to include in what will be committed)\n\n beads.txt.orig\nShe removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands:\ngit add beads.txt\ngit commit --message \"Blue bead -- resolved conflict\"\n\nWith the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files.\n\nShe incorporates the changes from the left and right column to ensure her bead sequence is correct.\n\nOnce the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland:\ngit add beads.txt\ngit commit --message \"Red bead -- resolved conflict\"\nand then runs the final verification steps in the rebase process (git rebase --continue).\n\nThe verification process runs through to the end, and the elf checks her work using the command git log --oneline.\n9269914 Red bead -- resolved conflict\n4916353 Blue bead -- resolved conflict\naef0d5c Red bead -- added\n9b5555e Blue bead\nc2e4877 Red bead\nShe knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct.\nSometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It\u2019s made her more confident at restringing beads when she\u2019s found real mistakes. And she doesn\u2019t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don\u2019t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked.\n\nOur lessons from the workshop:\n\nBy using rebase to update your branches, you avoid merge commits and keep a clean commit history.\nIf you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter, --hard.\nIf you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch.\nIf you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. 
You will need to include how many commits back in time you want to review.", "year": "2015", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2015-12-07T00:00:00+00:00", "url": "https://24ways.org/2015/git-rebasing/", "topic": "code"} {"rowid": 53, "title": "Get Expressive with Your Typography", "contents": "In 1955 Beatrice Warde, an American communicator on typography, published a series of essays entitled The Crystal Goblet in which she wrote, \u201cPeople who love ideas must have a love of words. They will take a vivid interest in the clothes that words wear.\u201d And with that proposition Warde introduced the idea that just as we judge someone based on the clothes they are wearing, so we make judgements about text based on the typefaces in which it is set.\nBeatrice Warde. \u00a91970 Monotype Imaging Inc.\nChoosing the same typeface as everyone else, especially if you\u2019re trying to make a statement, is like turning up to a party in the same dress; to a meeting in the same suit, shirt and tie; or to a craft ale dispensary in the same plaid shirt and turned-up skinny jeans.\nBut there\u2019s more to your choice of typeface than simply making an impression. In 2012 Jon Tan wrote on 24 ways about a scientific study called \u201cThe Aesthetics of Reading\u201d which concluded that \u201cgood quality typography is responsible for greater engagement during reading and thus induces a good mood.\u201d\nFurthermore, at this year\u2019s Ampersand conference Sarah Hyndman, an expert in multisensory typography, discussed how typefaces can communicate with our subconscious. Sarah showed that different fonts could have an effect on how food tasted. A rounded font placed near a bowl of jellybeans would make them taste sweeter, and a jagged angular font would make them taste more sour. \nThe quality of your typography can therefore affect the mood of your reader, and your font choice directly affect the senses. This means you can manipulate the way people feel. You can change their emotional state through type alone. Now that\u2019s a real superpower!\nThe effects of your body text design choices are measurable but subtle. If you really want to have an impact you need to think big. Literally. Display text and headings are your attention grabbers. They are your chance to interrupt, introduce and seduce.\nDisplay text and headings set the scene and draw people in. Text set large creates an image that visitors see before they read, and that\u2019s your chance to choose a typeface that immediately expresses what the text, and indeed the entire website, stands for. What expectations of the text do you want to set up? Youthful enthusiasm? Businesslike? Cutting-edge? Hipster? Sensible and secure? Fun and informal? Authoritarian?\nTypography conveys much more than just information. It imparts feeling, emotion and sentiment, and arouses preconceived ideas of trust, tone and content. Think about taking advantage of this by introducing impactful, expressive typography to your designs on the web. You can alter the way your reader feels, so what emotion do you want to provoke?\nMaybe you want them to feel inspired like this stop smoking campaign:\nhelsenorge.no\nPerhaps they should be moved and intrigued, as with Makeshift magazine:\nmkshft.org\nOr calmly reassured:\nwww.cleopatra-marina.gr\nFonts also tap into the complex library of associations that we\u2019ve been accumulating in our brains all of our lives. 
You build up these associations every time you see a font from the context that you see it in. All of us associate certain letterforms with topics, times and places.\nRetiro is obviously Spanish:\nRetiro by Typofonderie\nBodoni and Eurostile used in this menu couldn\u2019t be much more Italian:\nBodoni and Eurostile, both designed in Italy\nTo me, Clarendon gives a sense of the 1960s and 1970s. I\u2019m not sure if that\u2019s what Costa was going for, but that\u2019s what it means to me:\nCosta coffee flier\nAnd Knockout and Gotham really couldn\u2019t be much more American:\nKnockout and Gotham by Hoefler & Co\nWhen it comes to choosing your display typeface, the type designer Christian Schwartz says there are two kinds. First are the workhorse typefaces that will do whatever you want them to do. Helvetica, Proxima Nova and Futura are good examples. These fonts can be shaped in many different ways, but this also means they are found everywhere and take great skill and practice to work with in a unique and striking manner.\nThe second kind of typeface is one that does most of the work for you. Like finely tailored clothing, it\u2019s the detail in the design that adds interest.\nSetting headings in Bree rather than Helvetica makes a big difference to the tone of the article\nSuch typefaces carry much more inherent character, but are also less malleable and harder to adapt to different contexts. Good examples are Marr Sans, FS Clerkenwell, Strangelove and Bree.\nPush the boat out\nRemember, all type can have an effect on the reader. Take advantage of that and allow your type to have its own vernacular and impact. Be expressive with your type. Don\u2019t be too reverential, dogmatic \u2013 or ordinary. Be brave and push a few boundaries.\nAdapted from Web Typography a book in progress by Richard Rutter.", "year": "2015", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2015-12-04T00:00:00+00:00", "url": "https://24ways.org/2015/get-expressive-with-your-typography/", "topic": "design"} {"rowid": 54, "title": "Putting My Patterns through Their Paces", "contents": "Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.\nThe thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.\nMeet the Teaser\nHere\u2019s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)\nIn its simplest form, a teaser contains a headline, which links to an article:\n\nFairly straightforward, sure. But it\u2019s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. 
In other words, we have a basic building block (.teaser) that contains a few discrete content types \u2013 some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.\n\n Nearly every element visible on this page is built out of our generic \u201cteaser\u201d pattern.\n \nBut the teaser variation I\u2019d like to call out is the one that appears on The Toast\u2019s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline.\n\n The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.\n \nAnd this is, as it happens, the teaser variation that gave me pause. Back in the old days \u2013 you know, like six months ago \u2013 I probably would\u2019ve marked this module up to match the design. In other words, I would\u2019ve looked at the module\u2019s visual hierarchy (metadata up top, headline and content below) and written the following HTML:\n
<div class=\"teaser\">\n <p class=\"teaser-byline\">By Author Name</p>\n <a class=\"comment-count\" href=\"#comments\">126 comments</a>\n <h1 class=\"teaser-title\"><a href=\"#\">Article Title</a></h1>\n <p class=\"teaser-excerpt\">Lorem ipsum dolor sit amet, consectetur\u2026</p>\n</div>
\nBut then I caught myself, and realized this wasn\u2019t the best approach.\nMoving Beyond Layout\nSince I\u2019ve started working responsively, there\u2019s a question I work into every step of my design process. Whether I\u2019m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:\n\nWhat if someone doesn\u2019t browse the web like I do?\n\n\u2026Okay, that doesn\u2019t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it\u2019s been invaluable to so many aspects of my practice. If I\u2019m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I\u2019m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It\u2019s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.\nAnd that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it\u2019d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they\u2019re associated.\nThat\u2019s why I find it\u2019s helpful to begin designing a pattern\u2019s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I\u2019m trying to support. In other words, if someone\u2019s encountering my design without the CSS I\u2019ve written, what should their experience be?\nSo I took a step back, and came up with a different approach:\n
<div class=\"teaser\">\n <h1 class=\"teaser-title\"><a href=\"#\">Article Title</a></h1>\n <p class=\"teaser-byline\">By Author Name</p>\n <p class=\"teaser-excerpt\">\n Lorem ipsum dolor sit amet, consectetur\u2026\n <a class=\"comment-count\" href=\"#comments\">126 comments</a>\n </p>\n</div>
\nMuch, much better. This felt like a better match for the content I was designing: the headline \u2013 easily most important element \u2013 was at the top, followed by the author\u2019s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that\u2019s why it\u2019s at the very end of the excerpt, the last element within our teaser. And with some light styling, we\u2019ve got a respectable-looking hierarchy in place:\n\nYeah, you\u2019re right \u2013 it\u2019s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we\u2019ll bolster the markup with an extra element around our title and byline:\n
<div class=\"teaser\">\n <div class=\"teaser-hed\">\n <h1 class=\"teaser-title\"><a href=\"#\">Article Title</a></h1>\n <p class=\"teaser-byline\">By Author Name</p>\n </div>\n \u2026\n</div>
\nWith that in place, we can use flexbox to tweak our layout, like so:\n.teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\nflex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.\n\nGetting closer! But as great as flexbox is, it doesn\u2019t do anything for elements outside our container, like our little comment count, which is, as you\u2019ve probably noticed, still stranded at the very bottom of our teaser.\nFlexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser\u2019s using a sufficiently modern version of flexbox. Here\u2019s the one we used:\nvar doc = document.body || document.documentElement;\nvar style = doc.style;\n\nif ( style.webkitFlexWrap == '' ||\n style.msFlexWrap == '' ||\n style.flexWrap == '' ) {\n doc.className += \" supports-flex\";\n}\nEagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser\u2019s design:\n.supports-flex .teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\n.supports-flex .teaser .comment-count {\n position: absolute;\n right: 0;\n top: 1.1em;\n}\nIf the supports-flex class is present, we can apply our flexbox layout to the title area, sure \u2013 but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don\u2019t meet our threshold for our advanced styles are left with an attractive design that matches our HTML\u2019s content hierarchy; but the ones that pass our test receive the finished, final design.\n\nAnd with that, our teaser\u2019s complete.\nDiving Into Device-Agnostic Design\nThis is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I\u2019d recommend Heydon Pickering\u2019s \u201cFlexbox Grid Finesse\u201d, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you\u2019d be right! In fact, it\u2019s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I\u2019ve found that thinking about my design as existing in broad experience tiers \u2013 in layers \u2013 is one of the best ways of designing for the modern web. And what\u2019s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.\nOpen video\n \n Even a simple search form can be conditionally enhanced, given a little layered thinking.\n \nThis more layered approach to interface design isn\u2019t a new one, mind you: it\u2019s been championed by everyone from Filament Group to the BBC. 
And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I\u2019ve found to practice responsive design. As Trent Walton once wrote,\n\nLike cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web\u2019s inherent variability.\n\nWe have a weird job, working on the web. We\u2019re designing for the latest mobile devices, sure, but we\u2019re increasingly aware that our definition of \u201csmartphone\u201d is much too narrow. Browsers have started appearing on our wrists and in our cars\u2019 dashboards, but much of the world\u2019s mobile data flows over sub-3G networks. After all, the web\u2019s evolution has never been charted along a straight line: it\u2019s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don\u2019t yet know about, a more device-agnostic, more layered design process can better prepare our patterns \u2013 and ourselves \u2013 for the future.\n(It won\u2019t help you get enough to eat at holiday parties, though.)", "year": "2015", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2015-12-10T00:00:00+00:00", "url": "https://24ways.org/2015/putting-my-patterns-through-their-paces/", "topic": "code"} {"rowid": 55, "title": "How Tabs Should Work", "contents": "Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They\u2019ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented.\nSo this post is my definition of how a tabbing system should work, and one approach of implementing that.\nBut\u2026 tabs are easy, right?\nI\u2019ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system:\nvar tabs = $('.tab').click(function () {\n tabs.hide().filter(this.hash).show();\n}).map(function () {\n return $(this.hash)[0];\n});\n\n$('.tab:first').click();\nSimple, right? Nearly fits in a tweet (ignoring the whole jQuery library\u2026). Still, it\u2019s riddled with problems that make it a far from perfect solution.\nRequirements: what makes the perfect tab?\n\nAll content is navigable and available without JavaScript (crawler-compatible and low JS-compatible).\nARIA roles.\nThe tabs are anchor links that:\n\nare clickable\nhave block layout\nhave their href pointing to the id of the panel element\nuse the correct cursor (i.e. cursor: pointer).\n\nSince tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open.\nRight-clicking (and Shift-clicking) doesn\u2019t cause the tab to be selected.\nNative browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place).\n\nThe first three points are all to do with the semantics of the markup and how the markup has been styled. I think it\u2019s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work.\nThe last three points are JavaScript problems. 
Let\u2019s investigate that.\nThe shitmus test\nLike a litmus test, here\u2019s a couple of quick ways you can tell if a tabbing system is poorly implemented:\n\nChange tab, then use the Back button (or keyboard shortcut) and it breaks\nThe tab isn\u2019t a link, so you can\u2019t open it in a new tab\n\nThese two basic things are, to me, the bare minimum that a tabbing system should have.\nWhy is this important?\nThe people who push their so-called native apps on users can\u2019t have more reasons why the web sucks. If something as basic as a tab doesn\u2019t work, obviously there\u2019s more ammo to push a closed native app or platform on your users.\nIf you\u2019re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn\u2019t mean don\u2019t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. :breath:\nURI fragment, absolute URL or query string?\nA URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel. A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content.\nThis decision really depends on the context of your tabbing system. For something like GitHub\u2019s tabs to view a pull request, it makes sense that the full URL changes.\nFor our problem though, I want to solve the issue when the page doesn\u2019t do a full URL update; that is, your regular run-of-the-mill tabbing system.\nI used to be from the school of using the hash to show the correct tab, but I\u2019ve recently been exploring whether the query string can be used. The biggest reason is that multiple hashes don\u2019t work, and comma-separated hash fragments don\u2019t make any sense to control multiple tabs (since it doesn\u2019t actually link to anything).\nFor this article, I\u2019ll keep focused on using a single tabbing system and a hash on the URL to control the tabs.\nMarkup\nI\u2019m going to assume subcontent, so my markup would look like this (yes, this is a cat demo\u2026):\n\n\n
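Something along these lines, where each tab is an anchor whose hash points at the id of its panel; only the .tab and .panel classes and the #dizzy id are relied on by the code that follows, so the remaining ids, the list wrapper and the panel content are illustrative:\n<ul class='tabs'>\n <li><a class='tab' href='#dizzy'>Dizzy</a></li>\n <li><a class='tab' href='#ninja'>Ninja</a></li>\n <li><a class='tab' href='#missy'>Missy</a></li>\n</ul>\n\n<div id='dizzy' class='panel'>\n \u2026\n</div>\n<div id='ninja' class='panel'>\n \u2026\n</div>\n<div id='missy' class='panel'>\n \u2026\n</div>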
\nIt\u2019s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we\u2019ll see once we\u2019re on to writing the code.\nURL-driven tabbing systems\nInstead of making the code responsive to the user\u2019s input, we\u2019re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free.\nWith that in mind, let\u2019s start building up our code. I\u2019ll assume we have the jQuery library, but I\u2019ve also provided the full code working without a library (vanilla, if you will), but it depends on relatively new (polyfillable) tech like classList and dataset (which generally have IE10 and all other browser support).\nNote that I\u2019ll start with the simplest solution, and I\u2019ll refactor the code as I go along, like in places where I keep calling jQuery selectors.\nfunction show(id) {\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n $('.tab').removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n $('.panel').hide().filter(id).show();\n}\n\n$(window).on('hashchange', function () {\n show(location.hash);\n});\n\n// initialise by showing the first panel\nshow('#dizzy');\nThis works pretty well for such little code. Notice that we don\u2019t have any click handlers for the user and the Back button works right out of the box.\nHowever, there\u2019s a number of problems we need to fix:\n\nThe initialised tab is hard-coded to the first panel, rather than what\u2019s on the URL.\nIf there\u2019s no hash on the URL, all the panels are hidden (and thus broken).\nIf you scroll to the bottom of the example, you\u2019ll find a \u201ctop\u201d link; clicking that will break our tabbing system.\nI\u2019ve purposely made the page long, so that when you click on a tab, you\u2019ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying.\n\nFrom our criteria at the start of this post, we\u2019ve already solved items 4 and 5. Not a terrible start. Let\u2019s solve items 1 through 3 next.\nUsing the URL to initialise correctly and protect from breakage\nInstead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it\u2019s available.\nThe problem is: what if the hash on the URL isn\u2019t actually for a tab?\nThe solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won\u2019t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. 
The result of $('.tab') should be cached.\nSo now the code will collect all the tabs, then find the related panels from those tabs, and we\u2019ll use that list to double-check the values we give the show function (during initialisation, for instance).\n// collect all the tabs\nvar tabs = $('.tab');\n\n// get an array of the panel ids (from the anchor hash)\nvar targets = tabs.map(function () {\n return this.hash;\n}).get();\n\n// use those ids to get a jQuery collection of panels\nvar panels = $(targets.join(','));\n\nfunction show(id) {\n // if no value was given, let's take the first panel\n if (!id) {\n id = targets[0];\n }\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n tabs.removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n panels.hide().filter(id).show();\n}\n\n$(window).on('hashchange', function () {\n var hash = location.hash;\n if (targets.indexOf(hash) !== -1) {\n show(hash);\n }\n});\n\n// initialise\nshow(targets.indexOf(location.hash) !== -1 ? location.hash : '');\nThe core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab.\nThe second breakage we saw in the original demo was that clicking the \u201ctop\u201d link would break our tabs. This was due to the hashchange event firing and the code didn\u2019t validate the hash that was passed. Now that the hash is validated, the panels don\u2019t break.\nSo far we\u2019ve got a tabbing system that:\n\nWorks without JavaScript.\nSupports right-click and Shift-click (and doesn\u2019t select in these cases).\nLoads the correct panel if you start with a hash.\nSupports native browser navigation.\nSupports the keyboard.\n\nThe only annoying problem we have now is that the page jumps when a tab is selected. That\u2019s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it\u2019s all for a good cause.\nRemoving the jump to tab\nYou\u2019d be forgiven for thinking you just need to hook a click handler and return false. It\u2019s what I started with. Only that\u2019s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support.\nThere may be another way to solve this, but what follows is the way I found \u2013 and it works. It\u2019s just a bit\u2026 hairy, as I said.\nWe\u2019re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won\u2019t jump the page.\nThe change involves the following:\n\nAdd a click handler that removes the id from the target panel, and cache this in a target variable that we\u2019ll use later in hashchange (see point 4).\nIn the same click handler, set the location.hash to the current link\u2019s hash. 
This is important because it forces a hashchange event regardless of whether the URL actually changed, which prevents the tabs breaking (try it yourself by removing this line).\nFor each panel, put a backup copy of the id attribute in a data property (I\u2019ve called it old-id).\nWhen the hashchange event fires, if we have a target value, let\u2019s put the id back on the panel.\n\nThese changes result in this final code:\n/*global $*/\n\n// a temp value to cache *what* we're about to show\nvar target = null;\n\n// collect all the tabs\nvar tabs = $('.tab').on('click', function () {\n target = $(this.hash).removeAttr('id');\n\n // if the URL isn't going to change, then hashchange\n // event doesn't fire, so we trigger the update manually\n if (location.hash === this.hash) {\n // but this has to happen after the DOM update has\n // completed, so we wrap it in a setTimeout 0\n setTimeout(update, 0);\n }\n});\n\n// get an array of the panel ids (from the anchor hash)\nvar targets = tabs.map(function () {\n return this.hash;\n}).get();\n\n// use those ids to get a jQuery collection of panels\nvar panels = $(targets.join(',')).each(function () {\n // keep a copy of what the original el.id was\n $(this).data('old-id', this.id);\n});\n\nfunction update() {\n if (target) {\n target.attr('id', target.data('old-id'));\n target = null;\n }\n\n var hash = window.location.hash;\n if (targets.indexOf(hash) !== -1) {\n show(hash);\n }\n}\n\nfunction show(id) {\n // if no value was given, let's take the first panel\n if (!id) {\n id = targets[0];\n }\n // remove the selected class from the tabs,\n // and add it back to the one the user selected\n tabs.removeClass('selected').filter(function () {\n return (this.hash === id);\n }).addClass('selected');\n\n // now hide all the panels, then filter to\n // the one we're interested in, and show it\n panels.hide().filter(id).show();\n}\n\n$(window).on('hashchange', update);\n\n// initialise\nif (targets.indexOf(window.location.hash) !== -1) {\n update();\n} else {\n show();\n}\nThis version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add.\nARIA roles\nThis article on ARIA tabs made it very easy to get the tabbing system working as I wanted.\nThe tasks were simple:\n\nAdd aria-role set to tab for the tabs, and tabpanel for the panels.\nSet aria-controls on the tabs to point to their related panel (by id).\nI use JavaScript to add tabindex=0 to all the tab elements.\nWhen I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false.\nWhen I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false.\n\nAnd that\u2019s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible.\nCheck out the final version (and the non-jQuery version as promised).\nIn conclusion\nThere\u2019s a lot of tab implementations out there, but there\u2019s an equal amount that break the browsing paradigm and the simple linkability of content. 
Clearly there\u2019s a special hell for those tab systems that don\u2019t even use links, but I think it\u2019s clear that even in something that\u2019s relatively simple, it\u2019s the small details that make or break the user experience.\nObviously there are corners I\u2019ve not explored, like when there\u2019s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that\u2019s for another year!", "year": "2015", "author": "Remy Sharp", "author_slug": "remysharp", "published": "2015-12-22T00:00:00+00:00", "url": "https://24ways.org/2015/how-tabs-should-work/", "topic": "code"} {"rowid": 56, "title": "Helping VIPs Care About Performance", "contents": "Making a site feel super fast is the easy part of performance work. Getting people around you to care about site speed is a much bigger challenge. How do we keep the site fast beyond the initial performance work? Keeping very important people like your upper management or clients invested in performance work is critical to keeping a site fast and empowering other designers and developers to contribute.\nThe work to get others to care is so meaty that I dedicated a whole chapter to the topic in my book Designing for Performance. When I speak at conferences, the majority of questions during Q&A are on this topic. When I speak to developers and designers who care about performance, getting other people at one\u2019s organization or agency to care becomes the most pressing question.\nMy primary response to folks who raise this issue is the question: \u201cWhat metric(s) do your VIPs care about?\u201d This is often met with blank stares and raised eyebrows. But it\u2019s also our biggest clue to what we need to do to help empower others to care about performance and work on it. Every organization and executive is different. This means that three major things vary: the primary metrics VIPs care about; the language they use about measuring success; and how change is enacted. By clueing in to these nuances within your organization, you can get a huge leg up on crafting a successful pitch about performance work.\nLet\u2019s start with the metric that we should measure. Sure, (most) everybody cares about money - but is that really the metric that your VIPs are looking at each day to measure the success or efficacy of your site? More likely, dollars are the end game, but the metrics or key performance indicators (KPIs) people focus on might be:\n\nrate of new accounts created/signups\ncost of acquiring or retaining a customer\nvisitor return rate\nvisitor bounce rate\nfavoriting or another interaction rate\n\nThese are just a few examples, but they illustrate how wide-ranging the options are that people care about. I find that developers and designers haven\u2019t necessarily investigated this when trying to get others to care about performance. We often reach for the obvious \u2013 money! \u2013 but if we don\u2019t use the same kind of language our VIPs are using, we might not get too far. You need to know this before you can make the case for performance work.\nTo find out these metrics or KPIs, start reading through the emails your VIPs are sending within your company. What does it say on company wikis? Are there major dashboards internally that people are looking at where you could find some good metrics? 
Listen intently in team meetings or thoroughly read annual reports to see what these metrics could be.\nThe second key here is to pick up on language you can effectively copy and paste as you make the case for performance work. You need to be able to reflect back the metrics that people already find important in a way they\u2019ll be able to hear. Once you know your key metrics, it\u2019s time to figure out how to communicate with your VIPs about performance using language that will resonate with them.\nLet\u2019s start with visit traffic as an example metric that a very important person cares about. Start to dig up research that other people and companies have done that correlates performance and your KPI. For example, cite studies:\n\n\u201cWhen the home page of Google Maps was reduced from 100KB to 70\u201380KB, traffic went up 10% in the first week, and an additional 25% in the following three weeks.\u201d (source).\n\nRead through websites like WPOStats, which collects the spectrum of studies on the impact of performance optimization on user experience and business metrics. Tweet and see if others have done similar research that correlates performance and your site\u2019s main KPI.\nOnce you have collected some research that touches on the same kind of language your VIPs use about the success of your site, it\u2019s time to present it. You can start with something simple, like a qualitative description of the work you\u2019re actively doing to improve the site that translates to improved metrics that your VIPs care about. It can be helpful to append a performance budget to any proposal so you can compare the budget to your site\u2019s reality and how it might positively impact those KPIs folks care about.\nWords and graphs are often only half the battle when it comes to getting others to care about performance. Often, videos appeal to folks\u2019 emotions in a way that is missed when glancing through charts and graphs. On A List Apart I recently detailed how to create videos of how fast your site loads. Let\u2019s say that your VIPs care about how your site loads on mobile devices; it\u2019s time to show them how your site loads on mobile networks.\nOpen video\n\nYou can use these videos to make a number of different statements to your VIPs, depending on what they care about:\n\nLook at how slow our site loads versus our competitor!\nLook at how slow our site loads for users in another country!\nLook at how slow our site loads on mobile networks!\n\nAgain, you really need to know which metrics your VIPs care about and tune into the language they\u2019re using. If they don\u2019t care about the overall user experience of your site on mobile devices, then showing them how slow your site loads on 3G isn\u2019t going to work. This will be your sales pitch; you need to practice and iterate on the language and highlights that will land best with your audience. \nTo make your sales pitch as solid as possible, gut-check your ideas on how to present it with other co-workers to get their feedback. Read up on how to construct effective arguments and deliver them; do some research and see what others have done at your company when pitching to VIPs. Are slides effective? Memos or emails? Hallway conversations? Sometimes the best way to change people\u2019s minds is by mentioning it in informal chats over coffee. Emulate the other leaders in your organization who are successful at this work. \nEvery organization and very important person is different. 
Learn what metrics folks truly care about, study the language that they use, and apply what you\u2019ve learned in a way that\u2019ll land with those individuals. It may take time to craft your pitch for performance work over time, but it\u2019s important work to do. If you\u2019re able to figure out how to mirror back the language and metrics VIPs care about, and connect the dots to performance for them, you will have a huge leg up on keeping your site fast in the long run.", "year": "2015", "author": "Lara Hogan", "author_slug": "larahogan", "published": "2015-12-08T00:00:00+00:00", "url": "https://24ways.org/2015/helping-vips-care-about-performance/", "topic": "business"} {"rowid": 57, "title": "Cooking Up Effective Technical Writing", "contents": "Merry Christmas! May your preparations for this festive season of gluttony be shaping up beautifully. By the time you read this I hope you will have ordered your turkey, eaten twice your weight in Roses/Quality Street (let\u2019s not get into that argument), and your Christmas cake has been baked and is now quietly absorbing regular doses of alcohol.\nSome of you may be reading this and scoffing Of course! I\u2019ve also made three batches of mince pies, a seasonal chutney and enough gingerbread men to feed the whole street! while others may be laughing Bake? Oh no, I can\u2019t cook to save my life.\nFor beginners, recipes are the step-by-step instructions that hand-hold us through the cooking process, but even as a seasoned expert you\u2019re likely to refer to a recipe at some point. Recipes tell us what we need, what to do with it, in what order, and what the outcome will be. It\u2019s the documentation behind our ideas, and allows us to take the blueprint for a tasty morsel and to share it with others so they can recreate it. In fact, this is a little like the open source documentation and tutorials that we put out there, similarly aiming to guide other developers through our creations.\nThe \u2018just\u2019ification of documentation\nLately it feels like we\u2019re starting to consider the importance of our words, and the impact they can have on others. Brad Frost warned us of the dangers of \u201cJust\u201d when it comes to offering up solutions to queries:\n\n\u201cJust use this software/platform/toolkit/methodology\u2026\u201d\n\u201cJust\u201d makes me feel like an idiot. \u201cJust\u201d presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. \u201cJust\u201d is a dangerous word.\n\u201cJust\u201d by Brad Frost\n\nI can really empathise with these sentiments. My relationship with code started out as many good web tales do, with good old HTML, CSS and JavaScript. University years involved some time with Perl, PHP, Java and C. In my first job I worked primarily with ColdFusion, a bit of ActionScript, some classic\u00a0 ASP and pinch of Java. I\u2019d do a bit of PHP outside work every now and again. .NET came in, but we never really got on, and eventually I started learning some Ruby, Python and Node. It was a broad set of learnings, and I enjoyed the similarities and differences that came with new languages. I don\u2019t develop day in, day out any more, and my interests and work have evolved over the years, away from full-time development and more into architecture and strategy. 
But I still make things, and I still enjoy learning.\nI have often found myself bemoaning the lack of tutorials or courses that cater for the middle level \u2013 someone who may be learning a new language, but who has enough programming experience under their belt to not need to revise the concepts of how loops or objects work, and is perfectly adept at googling the syntax for getting a substring. I don\u2019t want snippets out of context; I want an understanding of architectural principles, of the strengths and weaknesses, of the type of applications that work well with the language.\nI\u2019m caught in the place between snoozing off when \u2018Using the Instagram API with Ruby\u2019 hand-holds me through what REST is, and feeling like I\u2019m stupid and need to go back to dev school when I can\u2019t get my environment and dependencies set up, let alone work out how I\u2019m meant to get any code to run.\nIt\u2019s seems I\u2019m not alone with this \u2013 Erin McKean seems to have been here too:\n\n\u201cSome tutorials (especially coding tutorials) like to begin things in media res. Great for a sense of dramatic action, bad for getting to \u201cStep 1\u201d without tears. It can be really discouraging to fire up a fresh terminal window only to be confronted by error message after error message because there were obligatory steps 0.1.0 through 0.9.9 that you didn\u2019t even know about.\u201d\n\u201cTips for Learning What You Don\u2019t Know You Don\u2019t Know\u201d by Erin McKean\n\nI\u2019m sure you\u2019ve been here too. Many tutorials suffer badly from the fabled \u2018how to draw an owl\u2019-itis.\n\nIt\u2019s the kind of feeling you can easily get when sifting through recipes as well as with code. Far from being the simple instructions that let us just follow along, they too can be a minefield. Fall in too low and you may be skipping over an explanation of what simmering is, or set your sights too high and you may get stuck at the point where you\u2019re trying to sous vide a steak using your bathtub and a Ziploc bag.\nDon\u2019t be a turkey, use your loaf!\nMy mum is a great cook in my eyes (aren\u2019t all mums?). I love her handcrafted collection of gathered recipes from over the years, including the one below, which is a great example of how something may make complete sense to the writer, but could be impermeable to a reader.\n\nDepending on your level of baking knowledge, you may ask: What\u2019s SR flour? What\u2019s a tsp? Should I use salted or unsalted butter? Do I use sticks of cinnamon or ground? Why is chopped chocolate better? How do I cream things? How big should the balls be? How well is \u201cwell spaced\u201d? How much leeway do I have for \u201c(ish!!)\u201d? Does the \u201c20\u201d on the other cookie note mean I\u2019ll end up with twenty? At any point, making a wrong call could lead to rubbish cookies, and lead to someone heading down the path of an I can\u2019t cook mentality.\nYou may be able to cook (or follow recipes), but you may not understand the local terms for ingredients, may not be able to acquire something and need to know what kind of substitutes you can use, or may need to actually do some prep before you jump into the main bit.\nHowever, if we look at good examples of recipes, I think there\u2019s a lot we can apply when it comes to technical writing on the web. I\u2019ve written before about the benefit of breaking documentation into small, reusable parts, and this will help us, but we can also take it a bit further. 
Here are my five top tips for better technical writing.\n1. Structure and standardise your information\nThink of the structure of a recipe. We very often have some common elements and they usually follow roughly the same format. We have standards and conventions that allow us to understand very quickly what a recipe is and how it should be used.\u00a0\n\nGreat recipes help their chefs know what they need to get ready in advance, both in terms of buying ingredients and putting together their kit. They then talk through the process, using appropriate language, and without making assumptions that the person can fill in any gaps for themselves; they explain why things are done the way they are. The best recipes may also suggest how you can take what you\u2019ve done and put your own spin on it. For instance, a good recipe for the simple act of boiling an egg will explain cooking time in relation to your preference for yolk gooiness. There are also different flavour combinations to try, accompaniments, or presentation suggestions.\u00a0\nBy breaking down your technical writing into similar sections, you can help your audience understand the elements they\u2019ll be working with, what they need to do once they have these, and how they can move on from your self-contained illustration.\nTitle\n \n Ensure your title is suitably descriptive and representative of the result. Getting Started with Python perhaps isn\u2019t as helpful as Learn Python: General Syntax and Basics.\n \n Result\n \n Many recipes include a couple of lines as an overview of what you\u2019ll end up with, and many include a photo of the finished dish. With our technical writing we can do the same:\n In this tutorial we\u2019re going to learn how to set up our development environment, and we\u2019ll then undertake some exercises to explore the general syntax, finishing by building a mini calculator.\n \n Ingredients\n \n What are the components we\u2019ll be working with, whether in terms of versions, environment, languages or the software packages and libraries you\u2019ll need along the way? Listing these up front gives the reader a great summary of the things they\u2019ll be using, and any gotchas.\n Being able to provide a small amount of supporting information will also help less experienced users. Ideally, explain briefly what things are and why we\u2019re using it.\n \n Prep\n \n As we heard from Erin above, not fully understanding the prep needed can be a huge source of frustration. Attempting to run a code snippet without context will often lead to failure when the prerequisites and process aren\u2019t clear. Be sure to include information around any environment set-up, installation or config you\u2019ll need to have done before you start.\nStu Robson\u2019s Simple Sass documentation aims to do this before getting into specifics, although ideally this would also include setting up Sass itself.\n \n Instructions\n \nThe body of the tutorial itself is the whole point of our writing. The next four tips will hopefully make your tutorial much more successful.\n \n Variations\n \n Like our ingredients section, as important as explaining why we\u2019re using something in this context is, it\u2019s also great to explain alternatives that could be used instead, and the impact of doing so.\n Perhaps go a step further, explaining ways that people can change what you have done in your tutorial/readme for use in different situations, or to provide further reading around next steps. 
What happens if they want to change your static array of demo data to use JSON, for instance? By giving some thought to follow-up questions, you can better support your readers.\n While not in a separate section, the source code for GreenSock\u2019s GSAP JS basics explains:\n We\u2019ll use a window.onload for simplicity, but typically it is best to use either jQuery\u2019s $(document).ready() or $(window).load() or cross-browser event listeners so that you\u2019re not limited to one.\n Keep in mind to both:\n Explain what variations are possible.\n Explain why certain options may be more desirable than others in different situations.\n \n \n2. Small, reusable components\nReusable components are for life, not just for Christmas, and they\u2019re certainly not just for development. If you start to apply the structure above to your writing, you\u2019re probably going to keep coming across the same elements: Do I really have to explain how to install Sass and Node.js again, Sally? The danger with more clarity is that our writing becomes bloated and overly convoluted for advanced readers, those who don\u2019t need to be told how to beat an egg for the hundredth time.\u00a0\nInstead, by making our writing reusable and modular, and by creating smaller, central resources, we can provide context and extra detail where needed without diluting our core message. These could be references we create, or those already created well by others.\n\nThis recipe for katsudon makes use of this concept. Rather than explaining how to make tonkatsu or dashi stock, these each have their own page. Once familiar, more advanced readers will likely skip over the instructions for the component parts.\n\n3. Provide context to aid accessibility\nHere I\u2019m talking about accessibility in the broadest sense. Small, isolated snippets can be frustrating to those who don\u2019t fully understand the wider context of how our examples work.\nShowing an exciting standalone JavaScript function is great, but giving someone the full picture of how and when this is called, and how it should be included in relation to other HTML and CSS is even better. Giving your readers the ability to view a big picture version, and ideally the ability to download a full version of the source, will help to reduce some of the frustrations of trying to get your component to work in their set-up.\u00a0\n4. Be your own tech editor\nA good editor can be invaluable to your work, and wherever possible I\u2019d recommend that you try to get a neutral party to read over your writing. This may not always be possible, though, and you may need to rely on yourself to cast a critical eye over your work.\nThere are many tips out there around general editing, including printing out your work onto paper, or changing the font size: both will force your eyes to review it in a new light. Beyond this, I\u2019d like to encourage you to think about the following:\n\nExplain what things are. For example, instead of referencing Grunt, in the first instance perhaps reference \u201cGrunt (a JavaScript task runner that minimises repetitive activities through automation).\u201d\nExplain how you get things, even if this is a link to official installers and documentation. Don\u2019t leave your readers having to search.\nWhy are you using this approach/technology over other options?\nWhat happens if I use something else? 
What depends on this?\nAvoid exclusionary lingo or acronyms.\n\nAirbnb\u2019s JavaScript Style Guide includes useful pointers around their reasoning:\n\nUse computed property names when creating objects with dynamic property names.\nWhy? They allow you to define all the properties of an object in one place.\n\nThe language we use often makes assumptions, as we saw with \u201cjust\u201d. An article titled \u201cES6 for Beginners\u201d is hugely ambiguous: is this truly for beginner coders, or actually for people who have a good pre-existing understanding of JavaScript but are new to these features? Review your writing with different types of readers in mind. How might you confuse or mislead them? How can you better answer their questions?\nThis doesn\u2019t necessarily mean supporting everyone \u2013 your audience may need to have advanced skills \u2013 but even if you\u2019re providing low-level, deep-dive, reference material, trying not to make assumptions or take shortcuts will hopefully lead to better, clearer writing.\n5. A picture is worth a thousand words\u2026\n\u2026or even better: use a thousand pictures, stitched together into a quick video or animated GIF. People learn in different ways. Just as recipes often provide visual references or a video to work along with, providing your technical information with alternative demonstrations can really help get your point across. Your audience will be able to see exactly what you\u2019re doing, what they should expect as interaction responses, and what the process looks like at different points.\nThere are many, many options for recording your screen, including QuickTime Player on Mac OS X (File \u2192 New Screen Recording), GifGrabber, or Giffing Tool on Windows.\nPaul Swain, a UX designer, uses GIFs to provide additional context within his documentation, improving communication:\n\n\u201cMy colleagues (from across the organisation) love animated GIFs. Any time an interaction is referenced, it\u2019s accompanied by a GIF and a shared understanding of what\u2019s being designed. The humble GIF is worth so much more than a thousand words; and it\u2019s great for cats.\u201d\nPaul Swain\n\n\nNext time you\u2019re cooking up some instructions for readers, think back to what we can learn from recipes to help make your writing as accessible as possible. Use structure, provide reusable bitesize morsels, give some context, edit wisely, and don\u2019t scrimp on the GIFs. And above all, have a great Christmas!", "year": "2015", "author": "Sally Jenkinson", "author_slug": "sallyjenkinson", "published": "2015-12-18T00:00:00+00:00", "url": "https://24ways.org/2015/cooking-up-effective-technical-writing/", "topic": "content"} {"rowid": 58, "title": "Beyond the Style Guide", "contents": "Much like baking a Christmas cake, designing for the web involves creating an experience in layers. Starting with a solid base that provides the core experience (the fruit cake), we can add further layers, each adding refinement (the marzipan) and delight (the icing).\nDon\u2019t worry, this isn\u2019t a misplaced cake recipe, but an evaluation of modular design and the role style guides can play in acknowledging these different concerns, be they presentational or programmatic.\nThe auteur\u2019s style guide\nAlthough trained as a graphic designer, it was only when I encountered the immediacy of the web that I felt truly empowered as a designer. 
Given a desire to control every aspect of the resulting experience, I slowly adopted the role of an auteur, exploring every part of the web stack: front-end to back-end, and everything in between. A few years ago, I dreaded using the command line. Today, the terminal is a permanent feature in my Dock.\nIn straddling the realms of graphic design and programming, it\u2019s the point at which they meet that I find most fascinating, with each dicipline valuing the creation of effective systems, be they for communication or code efficiency. Front-end style guides live at this intersection, demonstrating both the modularity of code and the application of visual design.\nPainting by numbers\nIn our rush to build modular systems, design frameworks have grown in popularity. While enabling quick assembly, these come at the cost of originality and creative expression \u2013 perhaps one reason why we\u2019re seeing the homogenisation of web design.\nIn editorial design, layouts should accentuate content and present it in an engaging manner. Yet on the web we see a practice that seeks templated predictability. In \u2018Design Machines\u2019 Travis Gertz argued that (emphasis added):\n\nDesign systems still feel like a novelty in screen-based design. We nerd out over grid systems and modular scales and obsess over style guides and pattern libraries. We\u2019re pretty good at using them to build repeatable components and site-wide standards, but that\u2019s sort of where it ends. [\u2026] But to stop there is to ignore the true purpose and potential of a design system.\n\nUnless we consider how interface patterns fully embrace the design systems they should be built upon, style guides may exacerbate this paint-by-numbers approach, encouraging conformance and suppressing creativity.\nAnatomy of a button\nLet\u2019s take a look at that most canonical of components, the button, and consider what we might wish to document and demonstrate in a style guide.\nThe different layers of our button component.\nContent\nThe most variable aspect of any component. Content guidelines will exert the most influence here, dictating things like tone of voice (whether we should we use stiff, formal language like \u2018Submit form\u2019, or adopt a more friendly tone, perhaps \u2018Send us your message\u2019) and appropriate language. For an internationalised interface, this may also impact word length and text direction or orientation.\nStructure\nHTML provides a limited vocabulary which we can use to structure content and add meaning. For interactive elements, the choice of element can also affect its behaviour, such as whether a button submits form data or links to another page:\n\nButton text\nNote: One of the reasons I prefer to use \nAn icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users:\nOpen video\n A screen reader announcing a button with a title.\nHowever, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn\u2019t include the glyphs that are icons. 
Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts:\nThe mobile GitHub view with and without external fonts.\nEven if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn\u2019t load.\n\nProviding always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons.\n\nThis also reads just fine in screen readers:\nOpen video\n A screen reader announcing the revised button.\nClever usability enhancements don\u2019t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the \u201ccaptioned videos\u201d or \u201caudio description\u201d categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don\u2019t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study.\nMore information\nW3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out \u201cHow People with Disabilities Use the Web\u201d, there are \u201cTips for Getting Started\u201d for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way.\nConclusion\nYou can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don\u2019t notice when those fundamental aspects of good website design and development are done right. But they\u2019ll always know when they are implemented poorly.\nIf you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn\u2019t know they\u2019d need it:\nOpen video\n \nIn this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks \u201cWhat did she say?\u201d the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.", "year": "2015", "author": "Eric Eggert", "author_slug": "ericeggert", "published": "2015-12-17T00:00:00+00:00", "url": "https://24ways.org/2015/the-accessibility-mindset/", "topic": "code"} {"rowid": 66, "title": "Solve the Hard Problems", "contents": "So, here we find ourselves on the cusp of 2016. We\u2019ve had a good year \u2013 the web is still alive, no one has switched it off yet. 
Clients still have websites, teenagers still have phone apps, and there continue to be plenty of online brands to meaningfully engage with each day. Good job team, high fives all round.\nAs it\u2019s the time to make resolutions, I wanted to share three small ideas to take into the new year.\nGet good at what you do\n\u201cHow do you get to Carnegie Hall?\u201d the old joke goes. \u201cPractise, practise, practise.\u201d \nWe work in an industry where there is an awful lot to learn. There\u2019s a lot to learn to get started and then once you do, there\u2019s a lot more to learn to keep your skills current. Just when you think you\u2019ve mastered something, it changes.\nThis is true of many industries, of course, but the sheer pace of change for us makes learning not an annual activity, but daily. Learning takes time, and while I\u2019m not convinced that every skill takes the fabled ten thousand hours to master, there is certainly no escaping that to remain current we must reinvest time in keeping our skills up to date.\nPicking where to spend your time\nOne of the hardest aspects of this thing of ours is just choosing what to learn. If you, like me, invested any time in learning the Less CSS preprocessor over the last few years, you\u2019ll probably now be spending your time relearning Sass instead. If you spent time learning Grunt, chances are you\u2019ll now be thinking about whether you should switch to Gulp. It\u2019s not just that there are new types of tools, there are new tools and frameworks to do the things you\u2019re already doing, but, well, differently.\nDeciding what to learn is hard and the costs of backing the wrong horse can seriously mount up; so much so that by the time you\u2019ve learned and then relearned the tools everyone says you need for your job, there\u2019s rarely enough time to spend really getting to know how best to use them.\n\u00a0Practise, practise, practise\nDo you know how you don\u2019t get to Carnegie Hall? By learning a new instrument each week. It takes time and experience to really learn something well. That goes for a new JavaScript framework as much as a violin. If you flit from one shiny new thing to another, you\u2019re destined to produce amateurish work forever.\nLearn the new thing, but then stick with it long enough to get really good at it \u2013 even if Twitter trolls try to convince you it\u2019s not cool. What\u2019s really not cool is living as a forevernoob. \nIf you\u2019re still not sure what to learn, go back to basics. Considering a new CSS or JavaScript framework? Invest that time in learning the underlying CSS or JavaScript really well instead. Those skills will stand the test of time.\nAudience and purpose\nBack when I was in school, my English teacher (a nice Welsh lady, who I appreciate more now than I did back then) used to love to remind us that every piece of writing should have an audience and a purpose. So much so that audience and purpose almost became her catch phrase. For every essay, article or letter, we were reminded to consider who we were writing it for and what we were trying to achieve. \nIt\u2019s something I think about a lot; certainly when writing, but also in almost every other creative endeavour. Asking who is this for and what am I trying to achieve applies equally to designing a logo or website, through to composing music or writing software.\nBeing productive\nIt seems like everyone wants to have a product these days. 
As someone who used to do client services work and now has a product company, I often talk with people who are interested in taking something they\u2019ve built in-house and turning it into a product. You know the sort of thing: a design agency with its own CMS or project management web app; the very logical thought process of: if this helps our business, maybe others will find it valuable too; the question that inevitably follows: could we turn this into a product?\nWhether consciously or not, the audience and purpose influence nearly every aspect of your creative process. Once written or designed or developed or created, revising a work to change the audience and purpose can be quite a challenge. No matter how much you want to turn the tension-building, atmospheric music for a horror film into a catchy chart hit, it\u2019s going to be a struggle. Yes, it\u2019s music, but that\u2019s neither the audience nor purpose for which it was created.\nThe same is absolutely true for your in-house tools \u2013 those were also designed for a specific audience and purpose. Your in-house CMS would have been designed with an audience of your own development team, who are busy implementing sites for clients. The purpose is to make that team more productive overall, taking into account considerations of maintaining multiple sites on a common codebase, training clients, a more mature and stable platform and all the other benefits of reusing the same code for each project. The audience is your team and the purpose increased productivity.\nThat\u2019s very different from a customer who wants to buy a polished system to use off-the-shelf. If their needs perfectly aligned with yours then they wouldn\u2019t be in the market for your product \u2013 they would have built their own.\nSometimes you hear the advice to \u201cscratch your own itch\u201d when it comes to product design. I don\u2019t completely agree. Got an itch? Great. Find other itchy people and sell them a backscratcher.\nBuilding a product, like designing a website, is a lot of work. It requires knowing your audience and purpose inside out. You can\u2019t fudge it and you can\u2019t just hope you\u2019ll find an audience for some old thing you have lying around.\nAlways consider the audience and purpose for everything you create. It\u2019s often the difference between success and failure.\nSolve the hard problems\nHuman beings have a natural tendency to avoid hard problems. In digital design (websites, software, whatever) the received wisdom is often that we can get 80% of the way towards doing the hard thing by doing something that\u2019s not very hard.\nDo you know what you get at the end of it? Paid. But nothing really great ever happens that way.\nI worked on a client project a while back where one of the big challenges was making full use of the massive image library they had built up over the years. The client had tens of thousands of photographs, along with a fair amount of video and a large MP3 audio library too. If it wasn\u2019t managed carefully, storage sizes would get out of control, content would go unattributed, and everything would get very messy very quickly.\nI could tell from the outset that this aspect of the project was going to be a constant problem. So we tackled it head-on. We designed and built a media management system to hold and process all the assets, and added an API so the content management system could talk to it. 
Every time the site needed a photo at a new size, it made an API request to the system and everything was handled seamlessly.\nIt was a daunting job to invest all the time and effort in building that dedicated system and API, but it really paid off. Instead of having the constant troubles of a vast library of media, it became one of the strongest parts of the project.\nTurn your hardest problems into your biggest strengths\nThere\u2019s a funny thing about hard problems. The hardest problems are the most fun to solve and have the biggest impact.\nMaybe you\u2019re the sort of person who clocks in for work, does their job and clocks out at 5pm without another thought. But I don\u2019t think you are, because you\u2019re here reading this. If you really love what you do, I don\u2019t think you can be satisfied in your work unless you\u2019re seeking out and working on those hard problems. That\u2019s where the magic is.\n\nThe new year is a helpful time to think about breaking bad habits. Whether it\u2019s smoking a bit less, or going to the gym a bit more, the ticking over of the calendar can provide the motivation for a new start. I have some suggestions for you.\n\nGet good at what you do. Practise your skills and don\u2019t just flit from one shiny thing to the next.\nRemember who you\u2019re doing it for and why. Consider the audience and purpose for everything you create.\nSolve the hard problems. It\u2019s more interesting, more satisfying, and has a greater impact.\n\nAs we move into 2016, these are the things I\u2019m going to continue to work on. Maybe you\u2019d like to join me.", "year": "2015", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2015-12-24T00:00:00+00:00", "url": "https://24ways.org/2015/solve-the-hard-problems/", "topic": "process"} {"rowid": 67, "title": "What I Learned about Product Design This Year", "contents": "2015 was a humbling year for me. In September of 2014, I joined a tiny but established startup called SproutVideo as their third employee and first designer. The role interests me because it affords the opportunity to see how design can grow a solid product with a loyal user-base into something even better. \nThe work I do now could also have a real impact on the brand and user experience of our product for years to come, which is a thrilling prospect in an industry where much of what I do feels small and temporary. I got in on the ground floor of something special: a small, dedicated, useful company that cares deeply about making video hosting effortless and rewarding for our users.\nI had (and still have) grand ideas for what thoughtful design can do for a product, and the smaller-scale product design work I\u2019ve done or helped manage over the past few years gave me enough eager confidence to dive in head first. Readers who have experience redesigning complex existing products probably have a knowing smirk on their face right now. As I said, it\u2019s been humbling. A year of focused product design, especially on the scale we are trying to achieve with our small team at SproutVideo, has taught me more than any projects in recent memory. I\u2019d like to share a few of those lessons.\nProduct design is very different from marketing design\nThe majority of my recent work leading up to SproutVideo has been in marketing design. These projects are so fun because their aim is to communicate the value of the product in a compelling and memorable way. 
In order to achieve this goal, I spent a lot of time thinking about content strategy, responsive design, and how to create striking visuals that tell a story. These are all pursuits I love.\nProduct design is a different beast. When designing a homepage, I can employ powerful imagery, wild gradients, and somewhat-quirky fonts. When I began redesigning the SproutVideo product, I wanted to draw on all the beautiful assets I\u2019ve created for our marketing materials, but big gradients, textures, and display fonts made no sense in this new context.\nThat\u2019s because the product isn\u2019t about us, and it isn\u2019t about telling our story. Product design is about getting out of the way so people can do their job. The visual design is there to create a pleasant atmosphere for people to work in, and to help support the user experience. Learning to take \u201cus\u201d out of the equation took some work after years of creating gorgeous imagery and content for the sales-driven side of businesses.\nI\u2019ve learned it\u2019s very valuable to design both sides of the experience, because marketing and product design flex different muscles. If you\u2019re currently in an environment where the two are separate, consider switching teams in 2016. Designing for product when you\u2019ve mostly done marketing, or vice versa, will deepen your knowledge as a designer overall. You\u2019ll face new unexpected challenges, which is the only way to grow.\nProduct design can not start with what looks good on Dribbble\nI have an embarrassing confession: when I began the redesign, I had a secret goal of making something that would look gorgeous in my portfolio. I have a collection of product shots that I admire on Dribbble; examples of beautiful dashboards and widgets and UI elements that look good enough to frame. I wanted people to feel the same way about the final outcome of our redesign. Mistakenly, this was a factor in my initial work. I opened Photoshop and crafted pixel-perfect static buttons and form elements and color palettes that\u200a\u2014\u200awhen applied to our actual product\u200a\u2014\u200alooked like a toddler beauty pageant. It added up to a lot of unusable shininess, noise, and silliness.\nI was disappointed; these elements seemed so lovely in isolation, but in context, they felt tacky and overblown. I realized: I\u2019m not here to design the world\u2019s most beautiful drop down menu. Good design has nothing to do with ego, but in my experience designers are, at least a little bit, secret divas. I\u2019m no exception. I had to remind myself that I am not working in service of a bigger Dribbble following or to create the most Pinterest-ing work. My function is solely to serve the users\u200a\u2014\u200ato make life a little better for the good people who keep my company in business.\nThis meant letting go of pixel-level beauty to create something bigger and harder: a system of elements that work together in harmony in many contexts. The visual style exists to guide the users. When done well, it becomes a language that users understand, so when they encounter a new feature or have a new goal, they already feel comfortable navigating it. This meant stripping back my gorgeous animated menu into something that didn\u2019t detract from important neighboring content, and could easily fit in other parts of the app. 
In order to know what visual style would support the users, I had to take a wider view of the product as a whole.\nJust accept that designing a great product \u2013 like many worthwhile pursuits \u2013 is initially laborious and messy\nOnce I realized I couldn\u2019t start by creating the most Dribbble-worthy thing, I knew I\u2019d have to begin with the unglamorous, frustrating, but weirdly wonderful work of mapping out how the product\u2019s content could better be structured. Since we\u2019re redesigning an existing product, I assumed this would be fairly straightforward: the functionality was already in place, and my job was just to structure it in a more easily navigable way.\nI started by handing off a few wireframes of the key screens to the developer, and that\u2019s when the questions began rolling in: \u201cIf we move this content into a modal, how will it affect this similar action here?\u201d \u201cWhat happens if they don\u2019t add video tags, but they do add a description?\u201d \u201cWhat if the user has a title that is 500 characters long?\u201d \u201cWhat if they want their video to be private to some users, but accessible to others?\u201d.\nHow annoying (but really, fantastic) that people use our product in so many ways. Turns out, product design isn\u2019t about laying out elements in the most ideal scenario for the user that\u2019s most convenient for you. As product designers, we have to foresee every outcome, and anticipate every potential user need.\nWhich brings me to another annoying epiphany: if you want to do it well, and account for every user, product design is so much more snarly and tangled than you\u2019d expect going in. I began with a simple goal: to improve the experience on just one of our key product pages. However, every small change impacts every part of the product to some degree, and that impact has to be accounted for. Every decision is based on assumptions that have to be tested; I test my assumptions by observing users, talking to the team, wireframing, and prototyping. Many of my assumptions are wrong. There are days when it\u2019s incredibly frustrating, because an elegant solution for users with one goal will complicate life for users with another goal. It\u2019s vital to solve as many scenarios as possible, even though this is slow, sometimes mind-bending work.\nAs a side bonus, wireframing and prototyping every potential state in a product is tedious, but your developers will thank you for it. It\u2019s not their job to solve what happens when there\u2019s an empty state, error, or edge case. Showing you\u2019ve accounted for these scenarios will win a developer\u2019s respect; failing to do so will frustrate them.\nWhen you\u2019ve created and tested a system that supports user needs, it will be beautiful\nRemember what I said in the beginning about wanting to create a Dribbble-worthy product? When I stopped focusing on the visual details of the design (color, spacing, light and shadow, font choices) and focused instead on structuring the content to maximize usability and delight, a beautiful design began to emerge naturally.\nI began with grayscale, flat wireframes as a strategy to keep me from getting pulled into the visual style before the user experience was established. As I created a system of elements that worked in harmony, the visual style choices became obvious. Some buttons would need to be brighter and sit off the page to help the user spot important actions. 
Some elements would need line separators to create a hierarchy, where others could stand on their own as an emphasized piece of content. As the user experience took shape, the visual style emerged naturally to support it. The result is a product that feels beautiful to use, because I was thoughtful about the experience first.\n\nA big takeaway from this process has been that my assumptions will often be proven wrong. My assumptions about how to design a great product, and how users will interact with that product, have been tested and revised repeatedly. At SproutVideo we\u2019re about to undertake the biggest test of our work; we\u2019re going to launch a small part of the product redesign to our users. If I\u2019ve learned anything, it\u2019s that I will continue to be humbled by the ongoing effort of making the best product I can, which is a wonderful thing.\nNext year, I hope you all get to do work that takes you out of our comfort zone. Be regularly confounded and embarrassed by your wrong assumptions, learn from them, and come back and tell us what you learned in 2016.", "year": "2015", "author": "Meagan Fisher", "author_slug": "meaganfisher", "published": "2015-12-14T00:00:00+00:00", "url": "https://24ways.org/2015/what-i-learned-about-product-design-this-year/", "topic": "design"} {"rowid": 68, "title": "Grid, Flexbox, Box Alignment: Our New System for Layout", "contents": "Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. However, in 2016 we should be seeing it in a new improved form ready for our use in browsers.\nGrid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work.\nAnother new layout method has emerged over the past few years in a more public and perhaps more painful way. Shipped prefixed in browsers, The flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported. \nOwing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks. \nIf there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years time we\u2019ll be easily able to date a website to circa 2015 by seeing
divs with framework classes like row or col-md-6
in the markup.\nIn this article, I\u2019m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them.\nTo see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here. \nRelationship\nItems only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It\u2019s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout.\nAt a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox:\nSee the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen.\n\nAnd in grid layout (requires a CSS grid-supporting browser):\nSee the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen.\n\nAlignment\nFull-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you\u2019d like to have under your tree this Christmas, then this is the box you\u2019ll want to unwrap first.\nThe box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods. That includes grid layout, but also other layout methods. Once implemented in browsers, this specification will give us true vertical centring of all the things.\nOur examples above achieved full-height columns because the default value of align-items is stretch. The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property.\nThe examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. 
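As a rough sketch of the kind of CSS behind these alignment demos (the selectors and class names here are illustrative, not taken from the pens):\n.wrapper {\n display: flex;\n align-items: stretch; /* the default, which gives us the full-height columns */\n}\n.wrapper .centred {\n align-self: center; /* a single item overriding the container value */\n}\nSwap display: flex for display: grid and, thanks to the box alignment module, the same two properties apply to grid items.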
The other three images are aligned using different values of align-self.\nTake a look at an example in flexbox:\nSee the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen.\n\nAnd also in grid layout (requires a CSS grid-supporting browser):\nSee the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen.\n\nThe alignment properties used with CSS grid layout.\nFluid grids\nA cornerstone of responsive design is the concept of fluid grids.\n\n\u201c[\u2026]every aspect of the grid\u2014and the elements laid upon it\u2014can be expressed as a proportion relative to its container.\u201d\n\u2014Ethan Marcotte, \u201cFluid Grids\u201d\n\nThe method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element.\nh1 {\n margin-left: 14.575%; /*\u00a0144px / 988px = 0.14575\u00a0*/\n width: 70.85%; /*\u00a0700px / 988px = 0.7085\u00a0*/\n}\nIn more recent years, we\u2019ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple.\nThe most basic of flexbox demos shows this fluidity in action. The justify-content property \u2013 another property defined in the box alignment module \u2013 can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion.\nIn this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between.\nSee the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen.\n\nFor true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion.\nThe flexbox flex property is a shorthand for:\n\nflex-grow\nflex-shrink\nflex-basis\n\nThe flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value.\n\nflex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis.\nflex: 0 0 200px: a box that will be 200px and cannot grow or shrink.\nflex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller.\n\nIn this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis.\nSee the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.\n\nWhat I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out.\nIf I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. You can see what happens in this demo:\nSee the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen.\n\nOnce you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. 
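Here is a hedged sketch of that proportional growth (the class names are mine, not from the demos):\n.boxes {\n display: flex;\n}\n.boxes > div {\n flex: 3 1 100px; /* three shares of the spare space, from a 100 pixel basis */\n}\n.boxes > div:first-child {\n flex: 1 1 100px; /* the portrait box only gets one share */\n}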
Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes.\nIn this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr.\ngrid-template-columns: 1fr 2fr 2fr 2fr;\nSee the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen.\n\nThe four-track grid.\nSeparation of concerns\nMy younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup.\nBrowsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another.\nHere is a snippet from the Bootstrap grid examples \u2013 two columns with two nested columns:\n
<div class=\"row\">\n <div class=\"col-md-8\">\n .col-md-8\n <div class=\"row\">\n <div class=\"col-md-6\">\n .col-md-6\n </div>\n <div class=\"col-md-6\">\n .col-md-6\n </div>\n </div>\n </div>\n <div class=\"col-md-4\">\n .col-md-4\n </div>\n</div>\nNot a million miles away from something I might have written in 1999.\n<table>\n <tr>\n <td>\n .col-md-8\n <table>\n <tr>\n <td>\n .col-md-6\n </td>\n <td>\n .col-md-6\n </td>\n </tr>\n </table>\n </td>\n <td>\n .col-md-4\n </td>\n </tr>\n</table>
\nGrid and flexbox layouts do not need to be described in markup. The layout description happens entirely in the CSS, meaning that elements can be moved around from within the presentation layer.\nFlexbox gives us the ability to reverse the flow of elements, but also to set the order of elements with the order property. This is demonstrated here, where Widget the cat is in position 1 in the source, but I have used the order property to display him after the things that are currently unimpressive to him.\nSee the Pen Flexbox: order by rachelandrew (@rachelandrew) on CodePen.\n\nGrid layout takes this a step further. Where flexbox lets us set the order of items in a single dimension, grid layout gives us the ability to position things in two dimensions: both rows and columns. Defined in the CSS, this positioning can be changed at any breakpoint without needing additional markup. Compare the source order with the display order in this example (requires a CSS grid-supporting browser):\nSee the Pen Grid positioning in two dimensions by rachelandrew (@rachelandrew) on CodePen.\n\nLaying out our items in two dimensions using grid layout.\nAs these demos show, a straightforward way to decide if you should use grid layout or flexbox is whether you want to position items in one dimension or two. If two, you want grid layout.\nA note on accessibility and reordering\nThe issues arising from this powerful ability to change the way items are ordered visually from how they appear in the source have been the subject of much discussion. The current flexbox editor\u2019s draft states\n\n\u201cAuthors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.\u201d\n\u2014CSS Flexible Box Layout Module Level 1, Editor\u2019s Draft (3 December 2015)\n\nThis is to ensure that non-visual user agents (a screen reader, for example) can rely on the document source order as being correct. Take care when reordering that you do so from the basis of a sound document that makes sense in terms of source order. Avoid using visual order to convey meaning.\nAutomatic content placement with rules\nHaving control over the order of items, or placing items on a predefined grid, is nice. However, we can often do that already with one method or another and we have frameworks and tools to help us. Tools such as Susy mean we can even get away from stuffing our markup full of grid classes. However, our new layout methods give us some interesting new possibilities.\nSomething that is useful to be able to do when dealing with content coming out of a CMS or being pulled from some other source, is to define a bunch of rules and then say, \u201cDisplay this content, using these rules.\u201d\nAs an example of this, I will leave you with a Christmas poem displayed in a document alongside Widget the cat and some of the decorations that are bringing him no Christmas cheer whatsoever.\nThe poem is displayed first in the source as a set of paragraphs. I\u2019ve added a class identifying each of the four paragraphs but they are displayed in the source as one text. Below that are all my images, some landscape and some portrait; I\u2019ve added a class of landscape to the landscape ones.\nThe mobile-first grid is a single column and I use line-based placement to explicitly position my poem paragraphs. 
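As a sketch of the idea (the class names and row numbers are illustrative rather than lifted from the demo):\n.wrapper {\n display: grid;\n grid-template-columns: 1fr; /* the single-column, mobile-first grid */\n}\n.verse-1 {\n grid-row: 1;\n}\n.verse-2 {\n grid-row: 3; /* explicit rows for the poem, leaving empty cells in between */\n}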
The grid layout auto-placement rules then take over and place the images into the empty cells left in the grid.\nAt wider screen widths, I declare a four-track grid, and position my poem around the grid, keeping it in a readable order.\nI also add rules to my landscape class, stating that these items should span two tracks. Once again the grid layout auto-placement rules position the rest of my images without my needing to position them. You will see that grid layout takes items out of source order to fill gaps in the grid. It does this because I have set the property grid-auto-flow to dense. The default is sparse meaning that grid will not attempt this backfilling behaviour.\nTake a look and play around with the full demo (requires a CSS grid layout-supporting browser):\nSee the Pen Grid auto-flow with rules by rachelandrew (@rachelandrew) on CodePen.\n\nThe final automatic placement example.\nMy wish for 2016\nI really hope that in 2016, we will see CSS grid layout finally emerge from behind browser flags, so that we can start to use these features in production \u2014 that we can start to move away from using the wrong tools for the job.\nHowever, I also hope that we\u2019ll see developers fully embracing these tools as the new system that they are. I want to see people exploring the possibilities they give us, rather than trying to get them to behave like the grid systems of 2015. As you discover these new modules, treat them as the new paradigm that they are, get creative with them. And, as you find the edges of possibility with them, take that feedback to the CSS Working Group. Help improve the layout systems that will shape the look of the future web.\nSome further reading\n\nI maintain a site of grid layout examples and resources at Grid by Example.\nThe three CSS specifications I\u2019ve discussed can be found as editor\u2019s drafts: CSS grid, flexbox, box alignment.\nI wrote about the last three years of my interest in CSS grid layout, which gives something of a history of the specification.\nMore examples of box alignment and grid layout.\nMy presentation at Fronteers earlier this year, in which I explain more about these concepts.", "year": "2015", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2015-12-15T00:00:00+00:00", "url": "https://24ways.org/2015/grid-flexbox-box-alignment-our-new-system-for-layout/", "topic": "code"} {"rowid": 69, "title": "How to Do a UX Review", "contents": "A UX review is where an expert goes through a website looking for usability and experience problems and makes recommendations on how to fix them. \nI\u2019ve completed a number of UX reviews over my twelve years working as a user experience consultant and I thought I\u2019d share my approach. \nI\u2019ll be talking about reviewing websites here; you can adapt the approach for web apps, or mobile or desktop apps. \nWhy conduct a review\nTypically, a client asks for a review to be undertaken by a trusted and, ideally, detached third party who either works for an agency or is a freelancer. Often they may ask a new member of the UX team to complete one, or even set it as a task for a job interview. This indicates the client is looking for an objective view, seen from the outside as a user would see the website. \nI always suggest conducting some user research rather than a review. Users know their goals and watching them make (what you might think of as) mistakes on the website is invaluable. 
Conducting research with six users can give you six hours\u2019 worth of review material from six viewpoints. In short, user research can identify more problems and show how common those problems might be. \nThere are three reasons, though, why a review might better suit client needs than user research: \n\nQuick results: user research and analysis takes at least three weeks.\nLimited budget: the \u00a36\u201310,000 cost to run user research is about twice the cost of a UX review. \nUsers are hard to reach: in the business-to-business world, reaching users is difficult, especially if your users hold senior positions in their organisations. Working with consumers is much easier as there are often more of them. \n\nThere is some debate about the benefits of user research over UX review. In my experience you learn far more from research, but opinions differ. \nBe objective\nThe number one mistake many UX reviewers make is reporting back the issues they identify as their opinion. This can cause credibility problems because you have to keep justifying why your opinion is correct. \nI\u2019ve had the most success when giving bad news in a UX review and then finally getting things fixed when I have been as objective as possible, offering evidence for why something may be a problem. \nTo be objective we need two sources of data: numbers from analytics to appeal to reason; and stories from users in the form of personas to speak to emotions. Highlighting issues with dispassionate numerical data helps show the extent of the problem. Making the problems more human using personas can make the problem feel more real. \nNumbers from analytics\nThe majority of clients I work with use Google Analytics, but if you use a different analytics package the same concepts apply. I use analytics to find two sets of things.\n1. Landing pages and search terms\nLanding pages are the pages users see first when they visit a website \u2013 more often than not via a Google search. Landing pages reveal user goals. If a user landed on a page called \u2018Yellow shoes\u2019 their goal may well be to find out about or buy some yellow shoes. \nIt would be great to see all the search terms bringing people to the website but in 2011 Google stopped providing search term data to (rightly!) protect users\u2019 privacy. You can get some search term data from Google Webmaster tools, but we must rely on landing pages as a clue to our users\u2019 goals. \nThe thing to look for is high-traffic landing pages with a high bounce rate. Bounce rate is the percentage of visitors to a website who navigate away from the site after viewing only one page. A high bounce rate (over 50%) isn\u2019t good; above 70% is bad.\nTo get a list of high-traffic landing pages with a high bounce rate install this bespoke report.\nGoogle Analytics showing landing pages ordered by popularity and the bounce rate for each.\nThis is the list of pages with high demand and that have real problems as the bounce rate is high. This is the main focus of the UX review. \n2. User flows\nWe have the beginnings of the user journey: search terms and initial landing pages. Now we can tap into the really useful bit of Google Analytics. Called behaviour flows, they show the most common order of pages visited. \nBehaviour flows from Google Analytics, showing the routes users took through the website.\nHere we can see the second and third (and so on) pages users visited. Importantly, we can also see the drop-outs at each step. 
\nIf your client has it set up, you can also set goal pages (for example, a post-checkout contact us and thank you page). You can then see a similar view that tracks back from the goal pages. If your client doesn\u2019t have this, suggest they set up goal tracking. It\u2019s easy to do. \nWe now have the remainder of the user journey. \nA user journey\nExpect the work in analytics to take up to a day. \nWe may well identify more than one user journey, starting from different landing pages and going to different second- and third-level pages. That\u2019s a good thing and shows we have different user types. Talking of user types, we need to define who our users are. \nPersonas\nWe have some user journeys and now we need to understand more about our users\u2019 motivations and goals. \nI have a love-hate relationship with personas, but used properly these portraits of users can help bring a human touch to our UX review. \nI suggest using a very cut-down view of a persona. My old friends Steve Cable and Richard Caddick at cxpartners have a great free template for personas from their book Communicating the User Experience.\nThe first thing to do is find a picture that represents that persona. Don\u2019t use crappy stock photography \u2013 it\u2019s sometimes hard to relate to perfect-looking people) \u2013 use authentic-looking people. Here\u2019s a good collection of persona photos. \nAn example persona.\nThe personas have three basic attributes:\n\nGoals: we can complete these drawing on the analytics data we have (see example).\nMusts: things we have to do to meet the persona\u2019s needs.\nMust nots: a list of things we really shouldn\u2019t do. \n\nCompleting points 2 and 3 can often be done during the writing of the report. \nLet\u2019s take an example. We know that the search term \u2018yellow shoes\u2019 takes the user to the landing page for yellow shoes. We also know this page has a high bounce rate, meaning it doesn\u2019t provide a good experience. \nWith our expert hat on we can review the page. We will find two types of problem: \n\nUsability issues: ineffective button placement or incorrect wording, links not looking like links, and so on. \nExperience issues: for example, if a product is out of stock we have to contact the business to ask them to restock. \n\nThat link is very small and hard to see.\nWe could identify that the contact button isn\u2019t easy to find (a usability issue) but that\u2019s not the real problem here. That the user has to ask the business to restock the item is a bad user experience. We add this to our personas\u2019 must nots. The big experience problems with the site form the musts and must nots for our personas. \nWe now have a story around our user journey that highlights what is going wrong. \nIf we\u2019ve identified a number of user journeys, multiple landing pages and differing second and third pages visited, we can create more personas to match. A good rule of thumb is no more than three personas. Any more and they lose impact, watering down your results. \nExpect persona creation to take up to a day to complete. \nLet\u2019s start the review\nWe take the user journeys and we follow them step by step, working through the website looking for the reasons why users drop out at each step. Using Keynote or PowerPoint, I structure the final report around the user journey with separate sections for each step.\nFor each step we\u2019ll find both usability and experience problems. Split the results into those two groups. 
\nUsability problems are fairly easy to fix as they\u2019re often quick design changes. As you go along, note the usability problems in one place: we\u2019ll call this \u2018quick wins\u2019. Simple quick fixes are a reassuring thing for a client to see and mean they can get started on stuff right away. You can mark the severity of usability issues. Use a scale from 1 to 3 (if you use 1 to 5 everything ends up being a 3!) where 1 is minor and 3 is serious. \nReview the website on the device you\u2019d expect your persona to use. Are they using the site on a smartphone? Review it on a smartphone. \nI allow one page or slide per problem, which allows me to explain what is going wrong. For a usability problem I\u2019ll often make a quick wireframe or sketch to explain how to address it. \nA UX review slide displaying all the elements to be addressed. These slides may be viewed from across the room on a screen so zoom into areas of discussion.\n(Quick tip: if you use Google Chrome, try Awesome Screenshot to capture screens.)\nWhen it comes to the more severe experience problems \u2013 things like an online shop not offering next day delivery, or a business that needs to promise to get back to new customers within a few hours \u2013 these will take more than a tweak to the UI to fix. \nCall these something different. I use the terms like business challenges and customer experience issues as they show that it will take changes to the organisation and its processes to address them. It\u2019s often beyond the remit of a humble UX consultant to recommend how to fix organisational issues, so don\u2019t try. \nAgain, create a page within your document to collect all of the business challenges together. \nExpect the review to take between one and three days to complete. \nThe final report should follow this structure:\n\nThe approach\nOverview of usability quick wins\nOverview of experience issues\nOverview of Google Analytics findings\nThe user journeys \nThe personas\nDetailed page-by-page review (broken down by steps on the user journey)\n\nThere are two academic theories to help with the review. \nHeuristic evaluation is a set of criteria to organise the issues you find. They\u2019re great for categorising the usability issues you identify but in practice they can be quite cumbersome to apply. \nI prefer the more scientific and much simpler cognitive walkthrough that is focused on goals and actions.\nA workshop to go through the findings\nThe most important part of the UX review process is to talk through the issues with your client and their team. \nA document can only communicate a certain amount. Conversations about the findings will help the team understand the severity of the issues you\u2019ve uncovered and give them a chance to discuss what to do about them. \nExpect the workshop to last around three hours.\nWhen presenting the report, explain the method you used to conduct the review, the data sources, personas and the reasoning behind the issues you found. Start by going through the usability issues. Often these won\u2019t be contentious and you can build trust and improve your credibility by making simple, easy to implement changes. \nThe most valuable part of the workshop is conversation around each issue, especially the experience problems. The workshop should include time to talk through each experience issue and how the team will address it. \nI collect actions on index cards throughout the workshop and make a note of who will take what action with each problem. 
\nIndex cards showing the problem and who is responsible.\nWhen talking through the issues, the person who designed the site is probably in the room \u2013 they may well feel threatened. So be nice.\nWhen I talk through the report I try to have strong ideas, weakly held.\nAt the end of the workshop you\u2019ll have talked through each of the issues and identified who is responsible for addressing them. To close the workshop I hand out the cards to the relevant people, giving them a physical reminder of the next steps they have to take. \nThat\u2019s my process for conducting a review. I\u2019d love to hear any tips you have in the comments.", "year": "2015", "author": "Joe Leech", "author_slug": "joeleech", "published": "2015-12-03T00:00:00+00:00", "url": "https://24ways.org/2015/how-to-do-a-ux-review/", "topic": "ux"} {"rowid": 70, "title": "Bringing Your Code to the Streets", "contents": "\u2014 or How to Be a Street VJ\nOur amazing world of web code is escaping out of the browser at an alarming rate and appearing in every aspect of the environment around us. Over the past few years we\u2019ve already seen JavaScript used server-side, hardware coded with JavaScript, a rise of native style and desktop apps created with HTML, CSS and JavaScript, and even virtual reality (VR) is getting its fair share of front-end goodness.\nYou can go ahead and play with JavaScript-powered hardware such as the Tessel or the Espruino to name a couple. Just check out the Tessel project page to see JavaScript in the world of coffee roasting or sleep tracking your pet. With the rise of the internet of things, JavaScript can be seen collecting information on flooding among other things. And if that\u2019s not enough \u2018outside the browser\u2019 implementations, Node.js servers can even be found in aircraft!\nI previously mentioned VR and with three.js\u2019s extra StereoEffect.js module it\u2019s relatively simple to get browser 3D goodness to be Google Cardboard-ready, and thus set the stage for all things JavaScript and VR. It\u2019s been pretty popular in the art world too, with interactive works such as Seb Lee-Delisle\u2019s Lunar Trails installation, featuring the old arcade game Lunar Lander, which you can now play in your browser while others watch (it is the web after all). The Science Museum in London held Chrome Web Lab, an interactive exhibition featuring five experiments, showcasing the magic of the web. And it\u2019s not even the connectivity of the web that\u2019s being showcased; we can even take things offline and use web code for amazing things, such as fighting Ebola.\nOne thing is for sure, JavaScript is awesome. Hell, if you believe those telly programs (as we all do), JavaScript can even take down the stock market, purely through the witchcraft of canvas! Go JavaScript!\nNow it\u2019s our turn\nSo I wanted to create a little project influenced by this theme, and as it\u2019s Christmas, take it to the streets for a little bit of party fun! Something that could take code anywhere. Here\u2019s how I made a portable visual projection pack, a piece of video mixing software and created some web-coded street art.\nStep one: The equipment\nYou will need:\n\nOne laptop: with HDMI output and a modern browser installed, such as Google Chrome.\nOne battery-powered mini projector: I\u2019ve used a Texas Instruments DLP; for its 120 lumens it was the best cost-to-lumens ratio I could find.\nOne MIDI controller (optional): mine is an ICON iDJ as it suits mixing visuals. 
However, there is more affordable hardware on the market such as an Akai LPD8 or a Korg nanoPAD2. As you\u2019ll see in the article, this is optional as it can be emulated within the software.\nA case to carry it all around in.\n\n\nStep two: The software\nThe projected visuals, I imagined, could be anything you can create within a browser, whether that be simple HTML and CSS, images, videos, SVG or canvas. The only requirement I have is that they move or change with sound and that I can mix any one visual into another.\nYou may remember a couple of years ago I created a demo on this very site, allowing audio-triggered visuals from the ambient sounds your device mic was picking up. That was a great starting point \u2013 I used that exact method to pick up the audio and thus the first requirement was complete. If you want to see some more examples of visuals I\u2019ve put together for this, there\u2019s a showcase on CodePen.\nThe second requirement took a little more thought. I needed two screens, which could at any point show any of the visuals I had coded, but could be mixed from one into the other and back again. So let\u2019s start with two divs, both absolutely positioned so they\u2019re on top of each other, but at the start the second screen\u2019s opacity is set to zero.\nNow all we need is a slider, which when moved from one side to the other slowly sets the second screen\u2019s opacity to 1, thereby fading it in.\nSee the Pen Mixing Screens (Software Version) by Rumyra (@Rumyra) on CodePen.\nMixing Screens (CodePen)\n\nAs you saw above, I have a MIDI controller and although the software method works great, I\u2019d quite like to make use of this nifty piece of kit. That\u2019s easily done with the Web MIDI API. All I need to do is call it, and when I move one of the sliders on the controller (I\u2019ve allocated the big cross fader in the middle for this), pick up on the change of value and use that to control the opacity instead.\nvar midi, data;\n// start talking to MIDI controller\nif (navigator.requestMIDIAccess) {\n navigator.requestMIDIAccess({\n sysex: false\n }).then(onMIDISuccess, onMIDIFailure);\n} else {\n alert(\u201cNo MIDI support in your browser.\u201d);\n}\n\n// on success\nfunction onMIDISuccess(midiData) {\n // this is all our MIDI data\n midi = midiData;\n\n var allInputs = midi.allInputs.values();\n // loop over all available inputs and listen for any MIDI input\n for (var input = allInputs.next(); input && !input.done; input = allInputs.next()) {\n // when a MIDI value is received call the onMIDIMessage function\n input.value.onmidimessage = onMIDIMessage;\n }\n}\n\nfunction onMIDIMessage(message) {\n // data comes in the form [command/channel, note, velocity]\n data = message.data;\n\n // Opacity change for screen. The cross fader values are [176, 8, {0-127}]\n if ( (data[0] === 176) && (data[1] === 8) ) {\n // this value will change as the fader is moved\n var opacity = data[2]/127;\n screenTwo.style.opacity = opacity;\n }\n}\n\nThe final code was slightly more complicated than this, as I decided to switch the two screens based on the frequencies of the sound that was playing, and use the cross fader to depict the frequency threshold value. This meant they flickered in and out of each other, rather than just faded. There\u2019s a very rough-and-ready first version of the software on GitHub.\nPhew, Great! Now we need to get all this to the streets!\nStep three: Portable kit\nDid you notice how I mentioned a case to carry it all around in? 
I wanted the case to be morphable, so I could use the equipment from it too, a sort of bag-to-usherette-tray-type affair. Well, I had an unused laptop bag\u2026\n\nI strengthened it with some MDF, so when I opened the bag it would hold like a tray where the laptop and MIDI controller would sit. The projector was Velcroed to the external pocket of the bag, so when it was a tray it would project from underneath. I added two durable straps, one for my shoulders and one round my waist, both attached to the bag itself. There was a lot of cutting and trimming. As it was a laptop bag it was pretty thick to start and sewing was tricky. However, I only broke one sewing machine needle; I\u2019ve been known to break more working with leather, so I figured I was doing well. By the way, you can actually buy usherette trays, but I just couldn\u2019t resist hacking my own :)\nStep four: Take to the streets\nFirst, make sure everything is charged \u2013 everything \u2013 a lot! The laptop has to power both the MIDI controller and the projector, and although I have a mobile phone battery booster pack, that\u2019ll only charge the projector should it run out. I estimated I could get a good hour of visual artistry before I needed to worry, though.\nI had a couple of ideas about time of day and location. Here in the UK at this time of year, it gets dark around half past four, so I could easily head out in a city around 5pm and it would be dark enough for the projections to be seen pretty well. I chose Bristol, around the waterfront, as there were some interesting locations to try it out in. The best was Millennium Square: busy but not crowded and plenty of surfaces to try projecting on to.\nMy first time out with the portable audio/visual pack (PAVP as it will now be named) was brilliant. I played music and projected visuals, like a one-woman band of A/V!\n\n\nYou might be thinking what the point of this was, besides, of course, it being a bit of fun. Well, this project got me to look at canvas and SVG more closely. The Web MIDI API was really interesting; MIDI as a data format has some great practical uses. I think without our side projects we may not have all these wonderful uses for our everyday code. Not only do they remind us coding can, and should, be fun, they also help us learn and grow as makers.\nMy favourite part? When I was projecting into a water feature in Millennium Square. For those who are familiar, you\u2019ll know it\u2019s like a wall of water so it produced a superb effect. I drew quite a crowd and a kid came to stand next to me and all I could hear him say with enthusiasm was, \u2018Oh wow! That\u2019s so cool!\u2019\nYes\u2026 yes, kid, it was cool. Making things with code is cool.\nMassive thanks to the lovely Drew McLellan for his incredibly well-directed photography, and also Simon Johnson who took a great hand in perfecting the kit while it was attached.", "year": "2015", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2015-12-06T00:00:00+00:00", "url": "https://24ways.org/2015/bringing-your-code-to-the-streets/", "topic": "code"} {"rowid": 71, "title": "Upping Your Web Security Game", "contents": "When I started working in web security fifteen years ago, web development looked very different. 
The few non-static web applications were built using a waterfall process and shipped quarterly at best, making it possible to add security audits before every release; applications were deployed exclusively on in-house servers, allowing Info Sec to inspect their configuration and setup; and the few third-party components used came from a small set of well-known and trusted providers. And yet, even with these favourable conditions, security teams were quickly overwhelmed and called for developers to build security in.\nIf the web security game was hard to win before, it\u2019s doomed to fail now. In today\u2019s web development, every other page is an application, accepting inputs and private data from users; software is built continuously, designed to eliminate manual gates, including security gates; infrastructure is code, with servers spawned with little effort and even less security scrutiny; and most of the code in a typical application is third-party code, pulled in through open source repositories with rarely a glance at who provided them.\nSecurity teams, when they exist at all, cannot solve this problem. They are vastly outnumbered by developers, and cannot keep up with the application\u2019s pace of change. For us to have a shot at making the web secure, we must bring security into the core. We need to give it no less attention than that we give browser compatibility, mobile design or web page load times. More broadly, we should see security as an aspect of quality, expecting both ourselves and our peers to address it, and taking pride when we do it well.\nWhere To Start?\nEmbracing security isn\u2019t something you do overnight.\nA good place to start is by reviewing things you\u2019re already doing \u2013 and trying to make them more secure. Here are three concrete steps you can take to get going.\nHTTPS\nThreats begin when your system interacts with the outside world, which often means HTTP. As is, HTTP is painfully insecure, allowing attackers to easily steal and manipulate data going to or from the server. HTTPS adds a layer of crypto that ensures the parties know who they\u2019re talking to, and that the information exchanged can be neither modified nor sniffed.\nHTTPS is relevant to any site. If your non-HTTPS site holds opinions, reading it may get your users in trouble with employers or governments. If your users believe what you say, attackers can modify your non-HTTPS to take advantage of and abuse that trust. If you want to use new browser technologies like HTTP2 and service workers, your site will need to be HTTPS. And if you want to be discovered on the web, using HTTPS can help your Google ranking. For more details on why I think you should make the switch to HTTPS, check out this post, these slides and this video.\nUsing HTTPS is becoming easier and cheaper. Here are a few free tools that can help:\n\nGet free and easy HTTPS delivery from Cloudflare (be sure to use \u201cFull SSL\u201d!)\nGet a free and automation-friendly certificate from Let\u2019s Encrypt (now in open beta).\nTest how well your HTTPS is set up using SSLTest.\n\nOther vendors and platforms are rapidly simplifying and reducing the cost of their HTTPS offering, as demand and importance grows.\nTwo-Factor Authentication\nThe most sensitive data is usually stored behind a login, and the authentication process is the primary gate in front of this data. 
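That login gate should itself only ever be reached over HTTPS. As a rough illustration, in a Node and Express app you might redirect plain HTTP and send an HSTS header; the middleware below is a sketch rather than something prescribed here:\nconst express = require('express');\nconst app = express();\n\n// assumes TLS terminates at a proxy or load balancer in front of the app\napp.enable('trust proxy');\n\napp.use((req, res, next) => {\n if (req.secure) {\n // ask browsers to stick to HTTPS on future visits\n res.setHeader('Strict-Transport-Security', 'max-age=31536000');\n return next();\n }\n res.redirect('https://' + req.headers.host + req.url);\n});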
Making this process secure has many aspects, including using HTTPS when accepting credentials, having a strong password policy, never storing the password, and more.\nAll of these are important, but the best single step to boost your authentication security is to introduce two-factor authentication (2FA). Adding 2FA usually means prompting users for an additional one-time code when logging in, which they get via SMS or a mobile app (e.g. Google Authenticator). This code is short-lived and is extremely hard for a remote attacker to guess, thus vastly reducing the risk a leaked or easily guessed password presents.\nThe typical algorithm for 2FA is based on an IETF standard called the time-based one-time password (TOTP) algorithm, and it isn\u2019t that hard to implement. Joel Franusic wrote a great post on implementing 2FA; modules like speakeasy make it even easier; and you can swap SMS with Google Authenticator or your own app if you prefer. If you don\u2019t want to build 2FA support yourself, you can purchase two/multi-factor authentication services from vendors such as DuoSecurity, Auth0, Clef, Hypr and others.\nIf implementing 2FA still feels like too much work, you can also choose to offload your entire authentication process to an OAuth-based federated login. Many companies offer this today, including Facebook, Google, Twitter, GitHub and others. These bigger players tend to do authentication well and support 2FA, but you should consider what data you\u2019re sharing with them in the process.\nTracking Known Vulnerabilities\nMost of the code in a modern application was actually written by third parties, and pulled into your app as frameworks, modules and libraries. While using these components makes us much more productive, along with their functionality we also adopt their security flaws. To make things worse, some of these flaws are well-known vulnerabilities, making it easy for hackers to take advantage of them in an attack.\nThis is a real problem and happens on pretty much every platform. Do you develop in Java? In 2014, over 6% of Java modules downloaded from Maven had a known severe security issue, the typical Java applications containing 24 flaws. Are you coding in Node.js? Roughly 14% of npm packages carry a known vulnerability, and over 60% of dev shops find vulnerabilities in their code. 30% of Docker Hub containers include a high priority known security hole, and 60% of the top 100,000 websites use client-side libraries with known security gaps.\nTo find known security issues, take stock of your dependencies and match them against language-specific lists such as Snyk\u2019s vulnerability DB for Node.js, rubysec for Ruby, victims-db for Python and OWASP\u2019s Dependency Check for Java. Once found, you can fix most issues by upgrading the component in question, though that may be tricky for indirect dependencies. \nThis process is still way too painful, which means most teams don\u2019t do it. The Snyk team and I are hoping to change that by making it as easy as possible to find, fix and monitor known vulnerabilities in your dependencies. Snyk\u2019s wizard will help you find and fix these issues through guided upgrades and patches, and adding Snyk\u2019s test to your continuous integration and deployment (CI/CD) will help you stay secure as your code evolves.\nNote that newly disclosed vulnerabilities usually impact old code \u2013 the one you\u2019re running in production. 
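Pausing on the tooling for a moment, here is a rough sketch of that find, fix and monitor workflow using the Snyk CLI named above (the exact commands and flags may have changed since this was written):\nnpm install -g snyk\nsnyk wizard # interactively find and fix known issues in your dependencies\nsnyk test # run in CI/CD and fail the build when known vulnerabilities are found\nsnyk monitor # take a snapshot so you can be told when new disclosures affect it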
This means you have to stay alert when new vulnerabilities are disclosed, so you can fix them before attackers can exploit them. You can do so by subscribing to vulnerability lists like US-CERT, OSVDB and NVD. Snyk\u2019s monitor will proactively let you know about new disclosures relevant to your code, but only for Node.js for now \u2013 you can register to get updated when we expand.\nSecuring Yourself\nIn addition to making your application secure, you should make the contributors to that application secure \u2013 including you. Earlier this year we\u2019ve seen attackers target mobile app developers with a malicious Xcode. The real target, however, wasn\u2019t these developers, but rather the users of the apps they create. That you create. Securing your own work environment is a key part of keeping your apps secure, and your users from being compromised.\nThere\u2019s no single step that will make you fully secure, but here are a few steps that can make a big impact:\n\nUse 2FA on all the services related to the application, notably source control (e.g. GitHub), cloud platform (e.g. AWS), CI/CD, CDN, DNS provider and domain registrar. If an attacker compromises any one of those, they could modify or replace your entire application. I\u2019d recommend using 2FA on all your personal services too.\nUse a password manager (e.g. 1Password, LastPass) to ensure you have a separate and complex password for each service. Some of these services will get hacked, and passwords will leak. When that happens, don\u2019t let the attackers access your other systems too.\nSecure your workstation. Be careful what you download, lock your screen when you walk away, change default passwords on services you install, run antivirus software, etc. Malware on your machine can translate to malware in your applications.\nBe very wary of phishing. Smart attackers use \u2018spear phishing\u2019 techniques to gain access to specific systems, and can trick even security savvy users. There are even phishing scams targeting users with 2FA. Be alert to phishy emails.\nDon\u2019t install things through curl | sudo bash, especially if the URL is on GitHub, meaning someone else controls it. Don\u2019t do it on your machines, and definitely don\u2019t do it in your CI/CD systems. Seriously.\n\nStaying secure should be important to you personally, but it\u2019s doubly important when you have privileged access to an application. Such access makes you a way to reach many more users, and therefore a more compelling target for bad actors.\nA Culture of Security\nUsing HTTPS, enabling two-factor authentication and fixing known vulnerabilities are significant steps in building security at your core. As you implement them, remember that these are just a few steps in a longer journey.\nThe end goal is to embrace security as an aspect of quality, and accept we all share the responsibility of keeping ourselves \u2013 and our users \u2013 safe.", "year": "2015", "author": "Guy Podjarny", "author_slug": "guypodjarny", "published": "2015-12-11T00:00:00+00:00", "url": "https://24ways.org/2015/upping-your-web-security-game/", "topic": "code"} {"rowid": 72, "title": "Designing with Contrast", "contents": "When an appetite for aesthetics over usability becomes the bellwether of user interface design, it\u2019s time to reconsider who we\u2019re designing for.\nOver the last few years, we have questioned the signifiers that gave obvious meaning to the function of interface elements. 
Strong textures, deep shadows, gradients \u2014 imitations of physical objects \u2014 were discarded. And many, rightfully so. Our audiences are now more comfortable with an experience that feels native to the technology, so we should respond in kind.\nYet not all of the changes have benefitted users. Our efforts to simplify brought with them a trend of ultra-minimalism where aesthetics have taken priority over legibility, accessibility and discoverability. The trend shows no sign of losing popularity \u2014 and it is harming our experience of digital content.\n\nA thin veneer\nWe are in a race to create the most subdued, understated interface. Visual contrast is out. In its place: the thinnest weights of a typeface and white text on bright color backgrounds. Headlines, text, borders, backgrounds, icons, form controls and inputs: all grey.\nWhile we can look back over the last decade and see minimalist trends emerging on the web, I think we can place a fair share of the responsibility for the recent shift in priorities on Apple. The release of iOS 7 ushered in a radical change to its user interface. It paired mobile interaction design to the simplicity and eloquence of Apple\u2019s marketing and product design. It was a catalyst. We took what we saw, copied and consumed the aesthetics like pick-and-mix.\nNew technology compounds this trend. Computer monitors and mobile devices are available with screens of unprecedented resolutions. Ultra-light type and subtle hues, difficult to view on older screens, are more legible on these devices. It would be disingenuous to say that designers have always worked on machines representative of their audience\u2019s circumstances, but the gap has never been as large as it is now. We are running the risk of designing VIP lounges where the cost of entry is a Mac with a Retina display.\nMinimalist expectations\nLike progressive enhancement in an age of JavaScript, many good and sensible accessibility practices are being overlooked or ignored. We\u2019re driving unilateral design decisions that threaten accessibility. We\u2019ve approached every problem with the same solution, grasping on to the integrity of beauty, focusing on expression over users\u2019 needs and content. \nSomeone once suggested to me that a client\u2019s website should include two states. The first state would be the ideal experience, with low color contrast, light font weights and no differentiation between links and text. It would be the default. The second state would be presented in whatever way was necessary to meet accessibility standards. Users would have to opt out of the default state via a toggle if it wasn\u2019t meeting their needs. A sort of first-class, upper deck cabin equivalent of graceful degradation. That this would divide the user base was irrelevant, as the aesthetics of the brand were absolute. \nIt may seem like an unusual anecdote, but it isn\u2019t uncommon to see this thinking in our industry. Again and again, we place the burden of responsibility to participate in a usable experience on others. We view accessibility and good design as mutually exclusive. Taking for granted what users will tolerate is usually the forte of monopolistic services, but increasingly we apply the same arrogance to our new products and services.\n\nImitation without representation\nAll of us are influenced in one way or another by one another\u2019s work. We are consciously and unconsciously affected by the visual and audible activity around us. This is important and unavoidable. 
We do not produce work in a vacuum. We respond to technology and culture. We channel language and geography. We absorb the sights and sounds of film, television, news. To mimic and copy is part and parcel of creating something an audience of many can comprehend and respond to. Our clients often look first to their competitors\u2019 products to understand their success.\nHowever, problems arise when we focus on style without context; form without function; mimicry as method. Copied and reused without any of the ethos of the original, stripped of deliberate and informed decision-making, the so-called look and feel becomes nothing more than paint on an empty facade.\nThe typographic and color choices so in vogue today with our popular digital products and services have little in common with the brands they are meant to represent.\n\nFor want of good design, the message was lost\nThe question to ask is: does the interface truly reflect the product? Is it an accurate characterization of the brand and organizational values? Does the delivery of the content match the tone of voice?\nThe answer is: probably not. Because every organization, every app or service, is unique. Each with its own personality, its own values and wonderful quirks. Design is communication. We should do everything in our role as professionals to use design to give voice to the message. Our job is to clearly communicate the benefits of a service and unreservedly allow access to information and content. To do otherwise, by obscuring with fashionable styles and elusive information architecture, does a great disservice to the people who chose to engage with and trust our products.\nWe can achieve hierarchy and visual rhythm without resorting to extreme reduction. We can craft a beautiful experience with fine detail and curiosity while meeting fundamental standards of accessibility (and strive to meet many more).\nStandards of excellence\nIt isn\u2019t always comfortable to step back and objectively question our design choices. We get lost in the flow of our work, using patterns and preferences we\u2019ve tried and tested before. That our decisions often seem like second nature is a gift of experience, but sometimes it prevents us from finding our blind spots.\nI was first caught out by my own biases a few years ago, when designing an interface for the Bank of England. After deciding on the colors for the typography and interactive elements, I learned that the site had to meet AAA accessibility standards. My choices quickly fell apart. It was eye-opening. I had to start again with restrictions and use size, weight and placement instead to construct the visual hierarchy.\nEven now, I make mistakes. On a recent project, I used large photographs on an organization\u2019s website to promote their products. Knowing that our team had control over the art direction, I felt confident that we could compose the photographs to work with text overlays. Despite our best effort, the cropped images weren\u2019t always consistent, undermining the text\u2019s legibility. If I had the chance to do it again, I would separate the text and image.\nSo, what practical things can we consider to give our users the experience they deserve?\nPut guidelines in place\n\nThink about your brand values. Write down keywords and use them as a framework when choosing a typeface. Explore colors that convey the organization\u2019s personality and emotional appeal.\nDefine a color palette that is web-ready and meets minimum accessibility standards. 
\n\nTest your work\n\nReview legibility and contrast on different devices. It\u2019s just as important as testing the layout of a responsive website. If you have a local device lab, pay it a visit.\nFind a computer monitor near a window when the sun is shining. Step outside the studio and try to read your content on a mobile device with different brightness levels. \nAsk your friends and family what they use at home and at work. It\u2019s one way of making sure your feedback isn\u2019t always coming from a closed loop.\n\nPush your limits\n\nYou define what the user sees. If you\u2019ve inherited brand guidelines, question them. If you don\u2019t agree with the choices, make the case for why they should change.\nExperiment with size, weight and color to find contrast. Objects with low contrast appear similar to one another and undermine the visual hierarchy. Weak relationships between figure and ground diminish visual interest. A balanced level of contrast removes ambiguity and creates focal points. It captures and holds our attention.\nIf you\u2019re lost for inspiration, look to graphic design in print. We have a wealth of history, full of examples that excel in using contrast to establish visual hierarchy.\nEmbrace limitations. Use boundaries as an opportunity to explore possibilities.\n\nMore than just a facade\nDesigning with standards encourages legibility and helps to define a strong visual hierarchy. Design without exclusion (through neither negligence nor intent) sidesteps discussions of demographics, speaks to a larger audience and makes good business sense. Following the latest trends not only weakens usability but also hinders a cohesive and distinctive brand.\nUsers will make do when they need to, by increasing browser font sizes or enabling system features for accessibility. But we can do our part to take as much of that burden as possible off the user, and to ask less of those who need it most.\nIn architecture, it isn\u2019t buildings that mimic what is fashionable that stand the test of time. Nor do we admire buildings that tack on separate, poorly constructed extensions to meet a bare minimum of safety regulations. We admire architecture that offers well-considered, remarkable, usable spaces with universal access.\nPerhaps we can take inspiration from these spaces. 
Let\u2019s give our buildings a bold voice and make sure the doors are open to everyone.", "year": "2015", "author": "Mark Mitchell", "author_slug": "markmitchell", "published": "2015-12-13T00:00:00+00:00", "url": "https://24ways.org/2015/designing-with-contrast/", "topic": "design"}