{"rowid": 51, "title": "Blow Your Own Trumpet", "contents": "Even if your own trumpet\u2019s tiny and fell out of a Christmas cracker, blowing it isn\u2019t something that everyone\u2019s good at. Some people find selling themselves and what they do difficult. But, you know what? Boo hoo hoo. If you want people to buy something, the reality is you\u2019d better get good at selling, especially if that something is you.\nFor web professionals, the best place to tell potential business customers or possible employers about what you do is on your own website. You can write what you want and how you want, but that doesn\u2019t make knowing what to write any easier. As a matter of fact, writing for yourself often proves harder than writing for someone else.\nI spent this autumn thinking about what I wanted to say about Stuff & Nonsense on the website we relaunched recently. While I did that, I spoke to other designers about how they struggled to write about their businesses.\nIf you struggle to write well, don\u2019t worry. You\u2019re not on your own. Here are five ways to hit the right notes when writing about yourself and your work.\nBe genuine about who you are\nI\u2019ve known plenty of talented people who run a successful business pretty much single-handed. Somehow they still feel awkward presenting themselves as individuals. They wonder whether describing themselves as a company will give them extra credibility. They especially agonise over using \u201cwe\u201d rather than \u201cI\u201d when describing what they do. These choices get harder when you\u2019re a one-man band trading as a limited company or LLC business entity.\nIf you mainly work alone, don\u2019t describe yourself as anything other than \u201cI\u201d. You might think that saying \u201cwe\u201d makes you appear larger and will give you a better chance of landing bigger and better work, but the moment a prospective client asks, \u201cHow many people are you?\u201d you\u2019ll have some uncomfortable explaining to do. This will distract them from talking about your work and derail your sales process. There\u2019s no need to be anything other than genuine about how you describe yourself. You should be proud to say \u201cI\u201d because working alone isn\u2019t something that many people have the ability, business acumen or talent to do.\nExplain what you actually do\nHow many people do precisely the same job as you? Hundreds? Thousands? The same goes for companies. If yours is a design studio, development team or UX consultancy, there are countless others saying exactly what you\u2019re saying about what you do. Simply stating that you code, design or \u2013 God help me \u2013 \u201chandcraft digital experiences\u201d isn\u2019t enough to make your business sound different from everyone else. Anyone can and usually does say that, but people buy more than deliverables. They buy something that\u2019s unique about you and your business.\nPotentially thousands of companies deliver code and designs the same way as Stuff & Nonsense, but our clients don\u2019t just buy page designs, prototypes and websites from us. They buy our taste for typography, colour and layout, summed up by our \u201cIt\u2019s the taste\u201d tagline and bowler hat tip to the PG Tips chimps. We hope that potential clients will understand what\u2019s unique about us. 
Think beyond your deliverables to what people actually buy, and sell the uniqueness of that.\nDescribe work in progress\nIt\u2019s sad that current design trends have made it almost impossible to tell one website from another. So many designers now demonstrate finished responsive website designs by pasting them onto iMac, MacBook, iPad and iPhone screens that their portfolios don\u2019t fare much better. Every designer brings their own experience, perspective and process to a project. In my experience, it\u2019s understanding those differences which forms a big part of how a prospective client makes a decision about who to work with. Don\u2019t simply show a prospective client the end result of a previous project; explain your process, the development of your thinking and even the wrong turns you took.\nTraditional case studies, like the one I\u2019ve just written about Stuff & Nonsense\u2019s work for WWF UK, can take a lot of time. That\u2019s probably why many portfolios get out of date very quickly. Designers make new work all the time, so there must be a better way to show more of it more often, to give prospective clients a clearer understanding of what we do. At Stuff & Nonsense our solution was to create a feed where we could post fragments of design work throughout a project. This also meant rewriting our Contract Killer to give us permission to publish work before someone signs it off.\nOutline a client\u2019s experience\nRecently a client took me to one side and offered some valuable advice. She told me that our website hadn\u2019t described anything about the experience she\u2019d had while working with us. She said that knowing more about how we work would\u2019ve helped her make her buying decision.\nWhen a client chooses your business, they\u2019re hoping for more than a successful outcome. They want their project to run smoothly. They want to feel that they made a correct decision when they chose you. If they work for an organisation, they\u2019ll want their good judgement to be recognised too. Our client didn\u2019t recognise her experience because we hadn\u2019t made our own website part of it. Remember, the challenge of creating a memorable user experience starts with selling to the people paying you for it.\nAddress your ideal client\nIt\u2019s important to understand that a portfolio\u2019s job isn\u2019t to document your work, it\u2019s to attract new work from clients you want. Make sure that work you show reflects the work you want, because what you include in your portfolio often leads to more of the same.\nWhen you\u2019re writing for your portfolio and elsewhere on your website, imagine that you\u2019re addressing your ideal client. Picture them sitting opposite and answer the questions they\u2019d ask as you would in conversation. Be direct, funny if that\u2019s appropriate and serious when it\u2019s not. If it helps, ask a friend to read the questions aloud and record what you say in response. This will help make what you write sound natural. I\u2019ve found this technique helps clients write copy too.\nToot your own horn\nSome people confuse expressing confidence in yourself and your work as boastfulness, but in a competitive world the reality is that if you are to succeed, you need to show confidence so that others can show their confidence in you. 
If you want people to hear you, pick up your trumpet and blow it.", "year": "2015", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2015-12-23T00:00:00+00:00", "url": "https://24ways.org/2015/blow-your-own-trumpet/", "topic": "business"} {"rowid": 59, "title": "Animating Your Brand", "contents": "Let\u2019s talk about how we add animation to our designs, in a way that\u2019s consistent with other aspects of our brand, such as fonts, colours, layouts and everything else.\nAnimating is fun. Adding animation to our designs can bring them to life and make our designs stand out. Animations can show how the pieces of our designs fit together. They provide context and help people use our products.\nAll too often animation is something we tack on at the end. We put a transition on a modal window or sliding menu and we often don\u2019t think about whether that animation is consistent with our overall design.\nStyle guides to the rescue\nA style guide is a document that establishes and enforces style to improve communication. It can cover anything from typography and writing style to ethics and other, broader goals. It might be a static visual document showing every kind of UI, like in the Codecademy.com redesign shown below.\nUI toolkit from \u201cReimagining Codecademy.com\u201d by @mslima\nIt might be a technical reference with code examples. CodePen\u2019s new design patterns and style guide is a great example of this, showing all the components used throughout the website as live code.\nCodePen\u2019s design patterns and style guide\nA style guide gives a wide view of your project, it maintains consistency when adding new content, and we can use our style guide to present animations.\nLiving documents\nStyle guides don\u2019t need to be static. We can use them to show movement. We can share CSS keyframe animations or transitions that can then go into production. We can also explain why animation is there in the first place.\nJust as a style guide might explain why we chose a certain font or layout, we can use style guides to explain the intent behind animation. This means that if someone else wants to create a new component, they will know why animation applies.\nIf you haven\u2019t yet set up a style guide, you might want to take a look at Pattern Lab. It\u2019s a great tool for setting up your own style guide and includes loads of design patterns to get started.\nThere are many style guide articles linked from the excellent, open sourced, Website Style Guide Resources. Anna Debenham also has an excellent pocket book on the subject.\nAdding animation\nBefore you begin throwing animation at all the things, establish the character you want to convey.\nAndrex Puppy (British TV ad from 1994)\nList some words that describe the character you\u2019re aiming for. If it was the Andrex brand, they might have gone for: fun, playful, soft, comforting.\nPerhaps you\u2019re aiming for something more serious, credible and authoritative. Or maybe exciting and intense, or relaxing and meditative. For each scenario, the animations that best represent these words will be different.\nIn the example below, two animations both take the same length of time, but use different timing functions. One eases, and the other bounces around. 
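Sketched as CSS, the only difference between the two is the timing function \u2013 the class names and the exact curve here are illustrative, not taken from the CodePen:\n@keyframes slide {\n  to { transform: translateX(200px); }\n}\n.ball--ease { animation: slide 1s ease; }\n.ball--bounce { animation: slide 1s cubic-bezier(0.68, -0.55, 0.265, 1.55); }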
Either might be good, depending on your needs.\nTiming functions (CodePen)\nExample: Kitman Labs\nWorking with Kitman Labs, we spent a little time working out what words best reflected the brand and came up with the following:\n\nScientific\nPrecise\nFast\nSolid\nDependable\nHelpful\nConsistent\nClear\n\nWith such a list of words in hand, we design animation that fits. We might prefer a tween that moves quickly to its destination over one that drifts slowly or bounces.\nWe can use the list when justifying our use of animation, such as when it helps our customers understand the context of data on the page. Or we may even choose not to animate, when that might make the message inconsistent.\nCreate guidelines\nIf you already have a style guide, adding animation could begin with creating an overview section.\nOne approach is to create a local website and share it within your organisation. We recently set up a local site for this purpose. \nA recent project\u2019s introduction to the topic of animation\nThis document becomes a reference when adding animation to components. Include links to related resources or examples of animation to help demonstrate the animation style you want.\nPrototyping\nYou can explain the intent of your animation style guide with live animations. This doesn\u2019t just mean waving our hands around. We can show animation through prototypes.\nThere are so many prototype tools right now. You could use Invision, Principle, Floid, or even HTML and CSS as embedded CodePens.\nA login flow prototype created in Principle\nThese tools help when trying out ideas and working through several approaches. Create videos, animated GIFs or online demos to share with others. Experiment. Find what works for you and work with whatever lets you get the most ideas out of your head fastest. Iterate and refine an animation before it gets anywhere near production.\nBuild up a collection\nBuild up your guide, one animation at a time.\nSome people prefer to loosely structure a guide with places to put things as they are discovered or invented; others might build it one page at a time \u2013 it doesn\u2019t matter. The main thing is that you collect animations like you would trading cards. Or Pokemon. Keep them ready to play and deliver that explosive result.\nYou could include animated GIFs, or link to videos or even live webpages as examples of animation. The use of animation to help user experience is also covered nicely in Val Head\u2019s UI animation and UX article on A List Apart.\nWhat matters is that you create an organised place for them to be found. Here are some ideas to get started.\nLogos and brandmarks\nMany sites include some subtle form of animation in their logos. This can draw the eye, add some character, or bring a little liveliness to an otherwise static page. Yahoo and Google have been experimenting with animation on their logos. Even a simple bouncing animation, such as the logo on Hop.ie, can add character.\nThe CSS-animated bouncer from Hop.ie\nContent transitions\nAdding content, removing content, showing and hiding messages are all opportunities to use animation. Careful and deliberate use of animation helps convey what\u2019s changing on screen.\nAnimating list items with CSS (CSSAnimation.rocks)\nFor more detail on this, I also recommend \u201cTransitional Interfaces\u201d by Pasquale D\u2019Silva.\nPage transitions\nOn a larger scale than the changes to content, full-page transitions can smooth the flow between sections of a site. 
Medium\u2019s article transitions are a good example of this.\nMedium-style page transition (Tympanus.net)\nPreparing a layout before the content arrives\nWe can use animation to draw a page before the content is ready, such as when a page calls a server for data before showing it.\nOptimistic loading grid (CodePen)\nSometimes it\u2019s good to show something to let the user know that everything\u2019s going well. A short animation could cover just enough time to load the initial content and make the loading transition feel seamless.\nInteractions\nHover effects, dropdown menus, slide-in menus and active states on buttons and forms are all opportunities. Look for ways you can remove the sudden changes and help make the experience of using your UI feel smoother.\nForm placeholder animation (Studio MDS)\nKeep animation visible\nIt takes continuous effort to maintain a style guide and keep it up to date, but it\u2019s worth it. Make it easy to include animation and related design decisions in your documentation and you\u2019ll be more likely to do so. If you can make it fun, and be proud of the result, better still.\nWhen updating your style guide, be sure to show the animations at the same time. This might mean animated GIFs, videos or live embedded examples of your components.\nBy doing this you can make animation integral to your design process and make sure it stays relevant.\nInspiration and resources\nThere are loads of great resources online to help you get started. One of my favourites is IBM\u2019s design language site.\nIBM\u2019s design language:\u200aanimation design guidelines\nIBM describes how animation principles apply to its UI work and components. They break down the animations into five categories of animations and explain how they apply to each example.\nThe site also includes an animation library with example videos of animations and links to source code.\nExample component from IBM\u2019s component library\nThe way IBM sets out its aims and methods is helpful not only for their existing designers and developers, but also helps new hires. Furthermore, it\u2019s a good way to show the world that IBM cares about these details.\nAnother popular animation resource is Google\u2019s material design.\nGoogle\u2019s material design documentation\nGoogle\u2019s guidelines cover everything from understanding easing through to creating engaging and useful mobile UI.\nThis approach is visible across many of Google\u2019s apps and software, and has influenced design across much of the web. The site is helpful both for learning about animation and as an showcase of how to illustrate examples.\nFrameworks\nIf you don\u2019t want to create everything from scratch, there are resources you can use to start using animation in your UI. One such resource is Salesforce\u2019s Lightning design system.\nThe system goes further than most guides. It includes a downloadable framework for adding animation to your projects. It has some interesting concepts, such as elevation settings to handle positioning on the z-axis.\nExample of elevation from Salesforce\u2019s Lightning design system\nYou should also check out Animate.css.\n\u201cJust add water\u201d\u200a\u2014\u200aAnimate.css\nAnimate.css gives you a set of predesigned animations you can apply to page elements using classes. If you use JavaScript to add or remove classes, you can then trigger complex animations. 
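Something as small as this is enough to trigger one of those predesigned animations \u2013 the .logo selector is a stand-in, while animated and bounce are class names from the Animate.css documentation:\nvar logo = document.querySelector('.logo');\nlogo.classList.add('animated', 'bounce');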
It also plays well with scroll-triggering, and tools such as WOW.js.\nLearn, evolve and make it your own\nThere\u2019s a wealth online of information and guides we can use to better understand animation. They can inspire and kick-start our own visual and animation styles. So let\u2019s think of the design of animations just as we do fonts, colours and layouts. Let\u2019s choose animation deliberately, making it part of our style guides.\nMany thanks to Val Head for taking the time to proofread and offer great suggestions for this article.", "year": "2015", "author": "Donovan Hutchinson", "author_slug": "donovanhutchinson", "published": "2015-12-01T00:00:00+00:00", "url": "https://24ways.org/2015/animating-your-brand/", "topic": "design"} {"rowid": 66, "title": "Solve the Hard Problems", "contents": "So, here we find ourselves on the cusp of 2016. We\u2019ve had a good year \u2013 the web is still alive, no one has switched it off yet. Clients still have websites, teenagers still have phone apps, and there continue to be plenty of online brands to meaningfully engage with each day. Good job team, high fives all round.\nAs it\u2019s the time to make resolutions, I wanted to share three small ideas to take into the new year.\nGet good at what you do\n\u201cHow do you get to Carnegie Hall?\u201d the old joke goes. \u201cPractise, practise, practise.\u201d \nWe work in an industry where there is an awful lot to learn. There\u2019s a lot to learn to get started and then once you do, there\u2019s a lot more to learn to keep your skills current. Just when you think you\u2019ve mastered something, it changes.\nThis is true of many industries, of course, but the sheer pace of change for us makes learning not an annual activity, but daily. Learning takes time, and while I\u2019m not convinced that every skill takes the fabled ten thousand hours to master, there is certainly no escaping that to remain current we must reinvest time in keeping our skills up to date.\nPicking where to spend your time\nOne of the hardest aspects of this thing of ours is just choosing what to learn. If you, like me, invested any time in learning the Less CSS preprocessor over the last few years, you\u2019ll probably now be spending your time relearning Sass instead. If you spent time learning Grunt, chances are you\u2019ll now be thinking about whether you should switch to Gulp. It\u2019s not just that there are new types of tools, there are new tools and frameworks to do the things you\u2019re already doing, but, well, differently.\nDeciding what to learn is hard and the costs of backing the wrong horse can seriously mount up; so much so that by the time you\u2019ve learned and then relearned the tools everyone says you need for your job, there\u2019s rarely enough time to spend really getting to know how best to use them.\n\u00a0Practise, practise, practise\nDo you know how you don\u2019t get to Carnegie Hall? By learning a new instrument each week. It takes time and experience to really learn something well. That goes for a new JavaScript framework as much as a violin. If you flit from one shiny new thing to another, you\u2019re destined to produce amateurish work forever.\nLearn the new thing, but then stick with it long enough to get really good at it \u2013 even if Twitter trolls try to convince you it\u2019s not cool. What\u2019s really not cool is living as a forevernoob. \nIf you\u2019re still not sure what to learn, go back to basics. Considering a new CSS or JavaScript framework? 
Invest that time in learning the underlying CSS or JavaScript really well instead. Those skills will stand the test of time.\nAudience and purpose\nBack when I was in school, my English teacher (a nice Welsh lady, who I appreciate more now than I did back then) used to love to remind us that every piece of writing should have an audience and a purpose. So much so that audience and purpose almost became her catch phrase. For every essay, article or letter, we were reminded to consider who we were writing it for and what we were trying to achieve. \nIt\u2019s something I think about a lot; certainly when writing, but also in almost every other creative endeavour. Asking who is this for and what am I trying to achieve applies equally to designing a logo or website, through to composing music or writing software.\nBeing productive\nIt seems like everyone wants to have a product these days. As someone who used to do client services work and now has a product company, I often talk with people who are interested in taking something they\u2019ve built in-house and turning it into a product. You know the sort of thing: a design agency with its own CMS or project management web app; the very logical thought process of: if this helps our business, maybe others will find it valuable too; the question that inevitably follows: could we turn this into a product?\nWhether consciously or not, the audience and purpose influence nearly every aspect of your creative process. Once written or designed or developed or created, revising a work to change the audience and purpose can be quite a challenge. No matter how much you want to turn the tension-building, atmospheric music for a horror film into a catchy chart hit, it\u2019s going to be a struggle. Yes, it\u2019s music, but that\u2019s neither the audience nor purpose for which it was created.\nThe same is absolutely true for your in-house tools \u2013 those were also designed for a specific audience and purpose. Your in-house CMS would have been designed with an audience of your own development team, who are busy implementing sites for clients. The purpose is to make that team more productive overall, taking into account considerations of maintaining multiple sites on a common codebase, training clients, a more mature and stable platform and all the other benefits of reusing the same code for each project. The audience is your team and the purpose increased productivity.\nThat\u2019s very different from a customer who wants to buy a polished system to use off-the-shelf. If their needs perfectly aligned with yours then they wouldn\u2019t be in the market for your product \u2013 they would have built their own.\nSometimes you hear the advice to \u201cscratch your own itch\u201d when it comes to product design. I don\u2019t completely agree. Got an itch? Great. Find other itchy people and sell them a backscratcher.\nBuilding a product, like designing a website, is a lot of work. It requires knowing your audience and purpose inside out. You can\u2019t fudge it and you can\u2019t just hope you\u2019ll find an audience for some old thing you have lying around.\nAlways consider the audience and purpose for everything you create. It\u2019s often the difference between success and failure.\nSolve the hard problems\nHuman beings have a natural tendency to avoid hard problems. 
In digital design (websites, software, whatever) the received wisdom is often that we can get 80% of the way towards doing the hard thing by doing something that\u2019s not very hard.\nDo you know what you get at the end of it? Paid. But nothing really great ever happens that way.\nI worked on a client project a while back where one of the big challenges was making full use of the massive image library they had built up over the years. The client had tens of thousands of photographs, along with a fair amount of video and a large MP3 audio library too. If it wasn\u2019t managed carefully, storage sizes would get out of control, content would go unattributed, and everything would get very messy very quickly.\nI could tell from the outset that this aspect of the project was going to be a constant problem. So we tackled it head-on. We designed and built a media management system to hold and process all the assets, and added an API so the content management system could talk to it. Every time the site needed a photo at a new size, it made an API request to the system and everything was handled seamlessly.\nIt was a daunting job to invest all the time and effort in building that dedicated system and API, but it really paid off. Instead of having the constant troubles of a vast library of media, it became one of the strongest parts of the project.\nTurn your hardest problems into your biggest strengths\nThere\u2019s a funny thing about hard problems. The hardest problems are the most fun to solve and have the biggest impact.\nMaybe you\u2019re the sort of person who clocks in for work, does their job and clocks out at 5pm without another thought. But I don\u2019t think you are, because you\u2019re here reading this. If you really love what you do, I don\u2019t think you can be satisfied in your work unless you\u2019re seeking out and working on those hard problems. That\u2019s where the magic is.\n\nThe new year is a helpful time to think about breaking bad habits. Whether it\u2019s smoking a bit less, or going to the gym a bit more, the ticking over of the calendar can provide the motivation for a new start. I have some suggestions for you.\n\nGet good at what you do. Practise your skills and don\u2019t just flit from one shiny thing to the next.\nRemember who you\u2019re doing it for and why. Consider the audience and purpose for everything you create.\nSolve the hard problems. It\u2019s more interesting, more satisfying, and has a greater impact.\n\nAs we move into 2016, these are the things I\u2019m going to continue to work on. Maybe you\u2019d like to join me.", "year": "2015", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2015-12-24T00:00:00+00:00", "url": "https://24ways.org/2015/solve-the-hard-problems/", "topic": "process"} {"rowid": 62, "title": "Being Customer Supportive", "contents": "Every day in customer support is an inbox, a Twitter feed, or a software forum full of new questions. Each is brimming with your customers looking for advice, reassurance, or fixes for their software problems. 
Each one is an opportunity to take a break from wrestling with your own troublesome tasks and assist someone else in solving theirs.\nSometimes the questions are straightforward and can be answered in a few minutes with a short greeting, a link to a help page, or a prewritten bit of text you use regularly: how to print a receipt, reset a password, or even, sadly, close your account.\nMore often, a support email requires you to spend some time unpacking the question, asking for more information, and writing a detailed personal response, tailored to help that particular user on this particular day.\nHere I offer a few of my own guidelines on how to make today\u2019s email the best support experience for both me and my customer. And even if you don\u2019t consider what you do to be customer support, you might still find the suggestions useful for the next time you need to communicate with a client, to solve a software problem with teammates, or even reach out and ask for help yourself.\n(All the examples appearing in this article are fictional. Any resemblance to quotes from real, software-using persons is entirely coincidental. Except for the bit about Star Wars. That happened.)\nWho\u2019s TAHT girl\nI\u2019ll be honest: I briefly tried making these recommendations into a clever mnemonic like FAST (facial drooping, arm weakness, speech difficulties, time) or PAD (pressure, antiseptic, dressing). But instead, you get TAHT: tone, ask, help, thank. Ah, well.\nAs I work through each message in my support queue, I\n\nlisten to the tone of the email\nask clarifying questions\nbring in extra help as needed\nand thank the customer when the problem is solved.\n\nLet\u2019s open an email and get started!\nLeave your message at the sound of the tone\nWith our enthusiasm for emoji, it can be very hard to infer someone\u2019s tone from plain text. How much time have you spent pondering why your friend responded with \u201cThanks.\u201d instead of \u201cThanks!\u201d? I mean, why didn\u2019t she :grin: or :wink: too?\nOur support customers, however, are often direct about how they\u2019re feeling:\n\nI\u2019m working against a deadline. Need this fixed ASAP!!!!\nThis hasn\u2019t worked in a week and I am getting really frustrated.\nI\u2019ve done this ten times before and it\u2019s always worked. I must be missing something simple.\n\nThey want us to understand the urgency of this from their point of view, just as much as we want to help them in a timely manner. How this information is conveyed gives us an instant sense of whether they are frustrated, angry, or confused\u2014and, just as importantly, how frustrated-angry-confused they are. \nListen to this tone before you start writing your reply. Here are two ways I might open an email:\n\n\u201cI\u2019m sorry that you ran into trouble with this.\u201d\n\u201cSorry you ran into trouble with this!\u201d\n\nThe content is largely the same, but the tone is markedly different. The first version is a serious, staid reaction to the problem the customer is having; the second version is more relaxed, but no less sincere.\nMatching the tone to the sender\u2019s is an important first step. Overusing exclamation points or dropping in too-casual language may further upset someone who is already having a crummy time with your product. 
But to a cheerful user, a formal reply or an impersonal form response can be off-putting, and damage a good relationship.\nWhen in doubt, I err on the side of being too formal, rather than sending a reply that may be read as flip or insincere. But whichever you choose, matching your correspondent\u2019s tone will make for a more comfortable conversation.\nCatch the ball and throw it back\nOnce you\u2019ve got that tone on lock, it\u2019s time to tackle the question at hand. Let\u2019s see what our customer needs help with today:\n\nI tried everything in the troubleshooting page but I can\u2019t get it to work again. I am on a Mac. Please help.\n\nHmm, not much information here. Now, if I got this short email after helping five other people with the same problem on Mac OS X, I would be sorely tempted to send this customer that common solution in my first reply. I\u2019ve found it\u2019s important to resist the urge to assume this sixth person needs the same answer as the other five, though: there isn\u2019t enough to connect this email to the ones that came before hers. \nInstead, ask a few questions to start. Invest some time to see if there are other symptoms in common, like so:\n\nI\u2019m sorry that you ran into trouble with this! I\u2019ll need a little more information to see what\u2019s happening here.\n[questions]\nThank you for your help.\n\nThose questions are customized for the customer\u2019s issue as much as possible, and can be fairly wide-ranging. They may include asking for log files, getting some screenshots, or simply checking the browser and operating system version she\u2019s using. I\u2019ll ask anything that might make a connection to the previous cases I\u2019ve answered\u2014or, just as importantly, confirm that there isn\u2019t a connection. What\u2019s more, a few well-placed questions may save us both from pursuing the wrong path and building additional frustration. \n(A note on that closing: \u201cThank you for your help\u201d\u2013I often end an email this way when I\u2019ve asked for a significant amount of follow up information. After all, I\u2019m imposing on my customer\u2019s time to run any number of tests. It\u2019s a necessary step, but I feel that thanking them is a nice acknowledgment we\u2019re in this together.)\nHaving said that, though, let\u2019s bring tone back into the mix:\n\nI tried everything in the troubleshooting but I can\u2019t get it to work again. I am on a Mac. I\u2019m working against a deadline. Need this fixed ASAP!!!!\n\nThis customer wants answers now. I\u2019ll still ask for more details, but would consider including the solution to the previous problem in my initial reply as well. (But only if doing so can\u2019t make the situation worse!)\n\nI\u2019m sorry that you ran into trouble with this! I\u2019ll need a little more information to see what\u2019s happening here.\n[questions]\nIf you\u2019d like to try something in the meantime, delete the file named xyz.txt. (If this isn\u2019t the cause of the problem, deleting the file won\u2019t hurt anything.) Here\u2019s how to find that file on your computer:\n[steps]\nLet me know how it goes!\n\nIn the best case, the suggestion works and the customer is on her way. If it doesn\u2019t solve the problem, you will get more information in answer to your questions and can explore other options. 
And you\u2019ve given the customer an opportunity to be involved in fixing the issue, and some new tools which might come in handy again in the future.\nBring in help\nThe support software I use counts how many emails the customer and I have exchanged, and reports it in a summary line in my inbox. It\u2019s an easy, passive reminder of how long the customer and I have been working together on a problem, especially first thing in the morning when I\u2019m reacquainting myself with my open support cases.\nThree is the smallest number I\u2019ll see there: the customer sends the initial question (1 email); I reply with an answer (2 emails); the customer confirms the problem is solved (3 emails). But the most complicated, stickiest tickets climb into double-digit replies, and anything that stretches beyond a dozen is worthy of a cheer in Slack when we finally get to the root of the problem and get it fixed.\nWhile an extra round of questions and answers will nudge that number higher, it gives me the chance to feel out the technical comfort level of the person I\u2019m helping. If I ask the customer to send some screenshots or log files and he isn\u2019t sure how to do that, I will use that information to adjust my instructions on next steps. I may still ask him to try running a traceroute on his computer, but I\u2019ll break down the steps into a concise, numbered list, and attach screenshots of each step to illustrate it.\nIf the issue at hand is getting complicated, take note if the customer starts to feel out of their depth technically\u2014either because they tell you so directly or because you sense a shift in tone. If that happens, propose bringing some outside help into the conversation:\n\nDo you have a network firewall or do you use any antivirus software? One of those might be blocking a connection that the software needs to work properly; here\u2019s a list of the required connections [link]. If you have an IT department in-house, they should be able to help confirm that none of those are being blocked.\n\nor:\n\nThis error message means you don\u2019t have permission to install the software on your own computer. Is there a systems administrator in the office that may be able to help with this? \n\nFor email-based support cases, I\u2019ll even offer to add someone from their IT department to the thread, so we can discuss the problem together rather than have the customer relay questions and answers back and forth.\nSimilarly, there are occasionally times when my way of describing things doesn\u2019t fit how the customer understands them. Rather than bang our heads against our keyboards, I will ask one of my support colleagues to join the conversation from our side, and see if he can explain things more clearly than I\u2019ve been able to do.\nWe appreciate your business. Please call again\nAnd then, o frabjous day, you get your reward: the reply which says the problem has been solved. \n\nThat worked!! Thank you so much for saving my day!\nI wish I could send you some cookies!\nIf you were here, I would give you my tickets to Star Wars.\n[Reply is an animated gif.]\n\nSometimes the reply is a bit more understated:\n\nThat fixed it. Thanks.\n\nWhether the customer is elated, satisfied, or frankly happy to be done with emailing support, I like to close longer email threads or short, complicated issues with a final thanks and reminder that we\u2019re here to help: \n\nThank you for the update; I\u2019m glad to hear that solved the problem for you! 
I hope everything goes smoothly for you now, but feel free to email us again if you run into any other questions or problems. Best,\n\nThen mark that support case closed, and move on to the next question. Because even with the most thoughtfully designed software product, there will always be customers with questions for your capable support team to answer.\nTone, ask, help, thank\nSo there you have it: TAHT. Pay attention to tone; ask questions; bring in help; thank your customer.\n(Lack of) catchy mnemonics aside, good customer support is about listening, paying attention, and taking care in your replies. I think it can be summed up beautifully by this quote from Pamela Marie (as tweeted by Chris Coyier):\n\nGolden rule asking a question: imagine trying to answer it \nGolden rule in answering: imagine getting your answer \n\nYou and your teammates are applying a variation of this golden rule in every email you write. You\u2019re the software ambassadors to your customers and clients. You get the brunt of the problems and complaints, but you also get to help fix them. You write the apologies, but you also have the chance to make each person\u2019s experience with your company or product a little bit better for next time.\nI hope that your holidays are merry and bright, and may all your support inboxes be light.", "year": "2015", "author": "Elizabeth Galle", "author_slug": "elizabethgalle", "published": "2015-12-02T00:00:00+00:00", "url": "https://24ways.org/2015/being-customer-supportive/", "topic": "process"} {"rowid": 52, "title": "Git Rebasing: An Elfin Workshop Workflow", "contents": "This year Santa\u2019s helpers have been tasked with making a garland. It\u2019s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they\u2019re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn\u2019t. It\u2019s worse; it\u2019s about Git.)\nFor the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they\u2019re able to work independently, but towards the common goal of making a beautiful garland.\nAt first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they\u2019re going to need a system to change the beads out when mistakes are made in the chain.\nThe first common mistake is not looking to see what the latest chain is that\u2019s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It\u2019s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It\u2019s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. 
They can request a new copy by running the command\ngit pull --rebase=preserve\n(They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they\u2019ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available.\nThe next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I\u2019ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it\u2019s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands:\ngit checkout master\ngit checkout -b 1234_pattern-name\n1234 represents the work order number and pattern-name describes the pattern they\u2019re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland.\nTo combine their work with the main garland, the elves need to make a few decisions. If they\u2019re making a single strand, they issue the following commands:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\nTo share their work they publish the new version of the main garland to the workshop spool with the command git push origin master.\nSometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order. \nTo update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command:\ngit reset --merge ORIG_HEAD\nThis works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf\u2019s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally.\nWith these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf\u2019s beads will be restrung on the tip of the main workshop garland.\nSometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf\u2019s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. 
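Pulled together, the whole recovery-and-update sequence is just the commands already described above, with 1234_pattern-name standing in for the elf\u2019s work order branch:\ngit reset --merge ORIG_HEAD\ngit pull --rebase=preserve\ngit checkout 1234_pattern-name\ngit rebase master\ngit mergetool\nThe last step is only needed when the rebase stops on a conflict.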
They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they\u2019re ready to close out their work order.\nOnce their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\ngit push origin master\nGenerally this process works well for the elves. Sometimes, though, when they\u2019re tired or bored or a little drunk on festive cheer, they realise there\u2019s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options.\nLet\u2019s pretend the elf has a sequence of five beads she\u2019s been working on. The work order says the pattern should be red-blue-red-blue-red.\n\nIf the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5.\n\nIf she\u2019s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5.\n\nIf one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence.\n\nUsing a tool that\u2019s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available.\nStart the garland surgery process with the command:\ngit rebase --interactive HEAD~4\nA new screen comes up with the following information (the oldest bead is on top):\npick c2e4877 Red bead\npick 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nThe elf adjusts the list, changing \u201cpick\u201d to \u201cedit\u201d next to the first blue bead:\npick c2e4877 Red bead\nedit 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nShe then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead.\n\nShe needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes.\nOnce she\u2019s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message \u201cRed bead \u2013 added\u201d so she can easily find it.\n\nThe garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue.\nThe new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. 
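At this point beads.txt itself contains the familiar conflict markers \u2013 roughly the following, although the exact label after the final marker depends on the git version and the commit being replayed:\n<<<<<<< HEAD\nRed bead\n=======\nBlue bead\n>>>>>>> 7afd66b... Blue bead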
She opens up a new program to help resolve the conflict by running git mergetool.\n\nShe knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads.\n\nWith the conflict resolved, the elf saves her changes and quits the mergetool.\nBack at the command line, the elf checks the status of her work using the command git status.\nrebase in progress; onto 4a9cb9d\nYou are currently rebasing branch '2_RBRBR' on '4a9cb9d'.\n (all conflicts fixed: run \"git rebase --continue\")\n\nChanges to be committed:\n (use \"git reset HEAD ...\" to unstage)\n\n modified: beads.txt\n\nUntracked files:\n (use \"git add ...\" to include in what will be committed)\n\n beads.txt.orig\nShe removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands:\ngit add beads.txt\ngit commit --message \"Blue bead -- resolved conflict\"\n\nWith the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files.\n\nShe incorporates the changes from the left and right column to ensure her bead sequence is correct.\n\nOnce the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland:\ngit add beads.txt\ngit commit --message \"Red bead -- resolved conflict\"\nand then runs the final verification steps in the rebase process (git rebase --continue).\n\nThe verification process runs through to the end, and the elf checks her work using the command git log --oneline.\n9269914 Red bead -- resolved conflict\n4916353 Blue bead -- resolved conflict\naef0d5c Red bead -- added\n9b5555e Blue bead\nc2e4877 Red bead\nShe knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct.\nSometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It\u2019s made her more confident at restringing beads when she\u2019s found real mistakes. And she doesn\u2019t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don\u2019t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked.\n\nOur lessons from the workshop:\n\nBy using rebase to update your branches, you avoid merge commits and keep a clean commit history.\nIf you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter, --hard.\nIf you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch.\nIf you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. 
You will need to include how many commits back in time you want to review.", "year": "2015", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2015-12-07T00:00:00+00:00", "url": "https://24ways.org/2015/git-rebasing/", "topic": "code"} {"rowid": 65, "title": "The Accessibility Mindset", "contents": "Accessibility is often characterized as additional work, hard to learn and only affecting a small number of people. Those myths have no logical foundation and often stem from outdated information or misconceptions.\nIndeed, it is an additional skill set to acquire, quite like learning new JavaScript frameworks, CSS layout techniques or new HTML elements. But it isn\u2019t particularly harder to learn than those other skills.\nA World Health Organization (WHO) report on disabilities states that,\n\n[i]ncluding children, over a billion people (or about 15% of the world\u2019s population) were estimated to be living with disability.\n\nBeing disabled is not as unusual as one might think. Due to chronic health conditions and older people having a higher risk of disability, we are also currently paving the cowpath to an internet that we can still use in the future.\nAccessibility has a very close relationship with usability, and advancements in accessibility often yield improvements in the usability of a website. Websites are also more adaptable to users\u2019 needs when they are built in an accessible fashion.\nBeyond the bare minimum\nIn the time of table layouts, web developers could create code that passed validation rules but didn\u2019t adhere to the underlying semantic HTML model. We later developed best practices, like using lists for navigation, and with HTML5 we started to wrap those lists in nav elements. Working with accessibility standards is similar. The Web Content Accessibility Guidelines (WCAG) 2.0 can inform your decision to make websites accessible and can be used to test that you met the success criteria. What it can\u2019t do is measure how well you met them. \nW3C developed a long list of techniques that can be used to make your website accessible, but you might find yourself in a situation where you need to adapt those techniques to be the most usable solution for your particular problem.\nThe checkbox below is implemented in an accessible way: The input element has an id and the label associated with the checkbox refers to the input using the for attribute. The hover area is shown with a yellow background and a black dotted border:\nOpen video\n \nThe label is clickable and the checkbox has an accessible description. Job done, right? Not really. Take a look at the space between the label and the checkbox:\nOpen video\n \nThe gutter is created using a right margin which pushes the label to the right. Users would certainly expect this space to be clickable as well. The simple solution is to wrap the label around the checkbox and the text:\nOpen video\n \nYou can also set the label to display:block; to further increase the clickable area:\nOpen video\n \nAnd while we\u2019re at it, users might expect the whole box to be clickable anyway. Let\u2019s apply the CSS that was on a wrapping div element to the label directly:\nOpen video\n \nThe result enhances the usability of your form element tremendously for people with lower dexterity, using a voice mouse, or using touch interfaces. And we only used basic HTML and CSS techniques; no JavaScript was added and not one extra line of CSS.\n
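As a rough sketch of that final pattern \u2013 the class name and wording here are placeholders rather than the original demo code \u2013 the label simply wraps the input and its text, and the styling that used to sit on the wrapping div moves onto the label:\n<label class=\"option\">\n  <input type=\"checkbox\" name=\"newsletter\" value=\"yes\"> Send me the newsletter\n</label>\n\nlabel.option {\n  display: block;\n  padding: 0.5em 1em;\n}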
\nButton Example\nThe button below looks like a typical edit button: a pencil icon on a real button element. But if you are using a screen reader or a braille keyboard, the button is just read as \u201cbutton\u201d without any indication of what this button is for.\nOpen video\n A screen reader announcing a button. Contains audio.\nThe code snippet shows why the button is not properly announced:\n\nAn icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users:\nOpen video\n A screen reader announcing a button with a title.\nHowever, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn\u2019t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts:\nThe mobile GitHub view with and without external fonts.\nEven if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn\u2019t load.\n\nProviding always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons.\n\nThis also reads just fine in screen readers:\nOpen video\n A screen reader announcing the revised button.\nClever usability enhancements don\u2019t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the \u201ccaptioned videos\u201d or \u201caudio description\u201d categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don\u2019t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study.\nMore information\nW3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out \u201cHow People with Disabilities Use the Web\u201d, there are \u201cTips for Getting Started\u201d for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way.\nConclusion\nYou can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don\u2019t notice when those fundamental aspects of good website design and development are done right. 
But they\u2019ll always know when they are implemented poorly.\nIf you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn\u2019t know they\u2019d need it:\nOpen video\n \nIn this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks \u201cWhat did she say?\u201d the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.", "year": "2015", "author": "Eric Eggert", "author_slug": "ericeggert", "published": "2015-12-17T00:00:00+00:00", "url": "https://24ways.org/2015/the-accessibility-mindset/", "topic": "code"} {"rowid": 54, "title": "Putting My Patterns through Their Paces", "contents": "Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.\nThe thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.\nMeet the Teaser\nHere\u2019s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)\nIn its simplest form, a teaser contains a headline, which links to an article:\n\nFairly straightforward, sure. But it\u2019s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types \u2013 some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.\n\n Nearly every element visible on this page is built out of our generic \u201cteaser\u201d pattern.\n \nBut the teaser variation I\u2019d like to call out is the one that appears on The Toast\u2019s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline.\n\n The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.\n \nAnd this is, as it happens, the teaser variation that gave me pause. Back in the old days \u2013 you know, like six months ago \u2013 I probably would\u2019ve marked this module up to match the design. 
In other words, I would\u2019ve looked at the module\u2019s visual hierarchy (metadata up top, headline and content below) and written the following HTML:\n
<div class="teaser">
  <p class="byline">By the author</p>
  <a class="comment-count" href="#comments">126 comments</a>
  <h2><a href="#">Article Title</a></h2>
  <p>Lorem ipsum dolor sit amet, consectetur…</p>
</div>
\nBut then I caught myself, and realized this wasn\u2019t the best approach.\nMoving Beyond Layout\nSince I\u2019ve started working responsively, there\u2019s a question I work into every step of my design process. Whether I\u2019m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:\n\nWhat if someone doesn\u2019t browse the web like I do?\n\n\u2026Okay, that doesn\u2019t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it\u2019s been invaluable to so many aspects of my practice. If I\u2019m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I\u2019m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It\u2019s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.\nAnd that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it\u2019d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they\u2019re associated.\nThat\u2019s why I find it\u2019s helpful to begin designing a pattern\u2019s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I\u2019m trying to support. In other words, if someone\u2019s encountering my design without the CSS I\u2019ve written, what should their experience be?\nSo I took a step back, and came up with a different approach:\n
<div class="teaser">
  <h2><a href="#">Article Title</a></h2>
  <p class="byline">By the author</p>
  <p>
    Lorem ipsum dolor sit amet, consectetur…
    <a class="comment-count" href="#comments">126 comments</a>
  </p>
</div>
\nMuch, much better. This felt like a better match for the content I was designing: the headline \u2013 easily most important element \u2013 was at the top, followed by the author\u2019s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that\u2019s why it\u2019s at the very end of the excerpt, the last element within our teaser. And with some light styling, we\u2019ve got a respectable-looking hierarchy in place:\n\nYeah, you\u2019re right \u2013 it\u2019s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we\u2019ll bolster the markup with an extra element around our title and byline:\n
<div class="teaser-hed">
  <h2><a href="#">Article Title</a></h2>
  <p class="byline">By the author</p>
</div>
…
\nWith that in place, we can use flexbox to tweak our layout, like so:\n.teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\nflex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.\n\nGetting closer! But as great as flexbox is, it doesn\u2019t do anything for elements outside our container, like our little comment count, which is, as you\u2019ve probably noticed, still stranded at the very bottom of our teaser.\nFlexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser\u2019s using a sufficiently modern version of flexbox. Here\u2019s the one we used:\nvar doc = document.body || document.documentElement;\nvar style = doc.style;\n\nif ( style.webkitFlexWrap == '' ||\n style.msFlexWrap == '' ||\n style.flexWrap == '' ) {\n doc.className += \" supports-flex\";\n}\nEagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser\u2019s design:\n.supports-flex .teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\n.supports-flex .teaser .comment-count {\n position: absolute;\n right: 0;\n top: 1.1em;\n}\nIf the supports-flex class is present, we can apply our flexbox layout to the title area, sure \u2013 but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don\u2019t meet our threshold for our advanced styles are left with an attractive design that matches our HTML\u2019s content hierarchy; but the ones that pass our test receive the finished, final design.\n\nAnd with that, our teaser\u2019s complete.\nDiving Into Device-Agnostic Design\nThis is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I\u2019d recommend Heydon Pickering\u2019s \u201cFlexbox Grid Finesse\u201d, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you\u2019d be right! In fact, it\u2019s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I\u2019ve found that thinking about my design as existing in broad experience tiers \u2013 in layers \u2013 is one of the best ways of designing for the modern web. And what\u2019s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.\nOpen video\n \n Even a simple search form can be conditionally enhanced, given a little layered thinking.\n \nThis more layered approach to interface design isn\u2019t a new one, mind you: it\u2019s been championed by everyone from Filament Group to the BBC. 
And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I\u2019ve found to practice responsive design. As Trent Walton once wrote,\n\nLike cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web\u2019s inherent variability.\n\nWe have a weird job, working on the web. We\u2019re designing for the latest mobile devices, sure, but we\u2019re increasingly aware that our definition of \u201csmartphone\u201d is much too narrow. Browsers have started appearing on our wrists and in our cars\u2019 dashboards, but much of the world\u2019s mobile data flows over sub-3G networks. After all, the web\u2019s evolution has never been charted along a straight line: it\u2019s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don\u2019t yet know about, a more device-agnostic, more layered design process can better prepare our patterns \u2013 and ourselves \u2013 for the future.\n(It won\u2019t help you get enough to eat at holiday parties, though.)", "year": "2015", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2015-12-10T00:00:00+00:00", "url": "https://24ways.org/2015/putting-my-patterns-through-their-paces/", "topic": "code"} {"rowid": 71, "title": "Upping Your Web Security Game", "contents": "When I started working in web security fifteen years ago, web development looked very different. The few non-static web applications were built using a waterfall process and shipped quarterly at best, making it possible to add security audits before every release; applications were deployed exclusively on in-house servers, allowing Info Sec to inspect their configuration and setup; and the few third-party components used came from a small set of well-known and trusted providers. And yet, even with these favourable conditions, security teams were quickly overwhelmed and called for developers to build security in.\nIf the web security game was hard to win before, it\u2019s doomed to fail now. In today\u2019s web development, every other page is an application, accepting inputs and private data from users; software is built continuously, designed to eliminate manual gates, including security gates; infrastructure is code, with servers spawned with little effort and even less security scrutiny; and most of the code in a typical application is third-party code, pulled in through open source repositories with rarely a glance at who provided them.\nSecurity teams, when they exist at all, cannot solve this problem. They are vastly outnumbered by developers, and cannot keep up with the application\u2019s pace of change. For us to have a shot at making the web secure, we must bring security into the core. We need to give it no less attention than that we give browser compatibility, mobile design or web page load times. More broadly, we should see security as an aspect of quality, expecting both ourselves and our peers to address it, and taking pride when we do it well.\nWhere To Start?\nEmbracing security isn\u2019t something you do overnight.\nA good place to start is by reviewing things you\u2019re already doing \u2013 and trying to make them more secure. Here are three concrete steps you can take to get going.\nHTTPS\nThreats begin when your system interacts with the outside world, which often means HTTP. 
As is, HTTP is painfully insecure, allowing attackers to easily steal and manipulate data going to or from the server. HTTPS adds a layer of crypto that ensures the parties know who they\u2019re talking to, and that the information exchanged can be neither modified nor sniffed.\nHTTPS is relevant to any site. If your non-HTTPS site holds opinions, reading it may get your users in trouble with employers or governments. If your users believe what you say, attackers can modify your non-HTTPS to take advantage of and abuse that trust. If you want to use new browser technologies like HTTP2 and service workers, your site will need to be HTTPS. And if you want to be discovered on the web, using HTTPS can help your Google ranking. For more details on why I think you should make the switch to HTTPS, check out this post, these slides and this video.\nUsing HTTPS is becoming easier and cheaper. Here are a few free tools that can help:\n\nGet free and easy HTTPS delivery from Cloudflare (be sure to use \u201cFull SSL\u201d!)\nGet a free and automation-friendly certificate from Let\u2019s Encrypt (now in open beta).\nTest how well your HTTPS is set up using SSLTest.\n\nOther vendors and platforms are rapidly simplifying and reducing the cost of their HTTPS offering, as demand and importance grows.\nTwo-Factor Authentication\nThe most sensitive data is usually stored behind a login, and the authentication process is the primary gate in front of this data. Making this process secure has many aspects, including using HTTPS when accepting credentials, having a strong password policy, never storing the password, and more.\nAll of these are important, but the best single step to boost your authentication security is to introduce two-factor authentication (2FA). Adding 2FA usually means prompting users for an additional one-time code when logging in, which they get via SMS or a mobile app (e.g. Google Authenticator). This code is short-lived and is extremely hard for a remote attacker to guess, thus vastly reducing the risk a leaked or easily guessed password presents.\nThe typical algorithm for 2FA is based on an IETF standard called the time-based one-time password (TOTP) algorithm, and it isn\u2019t that hard to implement. Joel Franusic wrote a great post on implementing 2FA; modules like speakeasy make it even easier; and you can swap SMS with Google Authenticator or your own app if you prefer. If you don\u2019t want to build 2FA support yourself, you can purchase two/multi-factor authentication services from vendors such as DuoSecurity, Auth0, Clef, Hypr and others.\nIf implementing 2FA still feels like too much work, you can also choose to offload your entire authentication process to an OAuth-based federated login. Many companies offer this today, including Facebook, Google, Twitter, GitHub and others. These bigger players tend to do authentication well and support 2FA, but you should consider what data you\u2019re sharing with them in the process.\nTracking Known Vulnerabilities\nMost of the code in a modern application was actually written by third parties, and pulled into your app as frameworks, modules and libraries. While using these components makes us much more productive, along with their functionality we also adopt their security flaws. To make things worse, some of these flaws are well-known vulnerabilities, making it easy for hackers to take advantage of them in an attack.\nThis is a real problem and happens on pretty much every platform. Do you develop in Java? 
In 2014, over 6% of Java modules downloaded from Maven had a known severe security issue, the typical Java applications containing 24 flaws. Are you coding in Node.js? Roughly 14% of npm packages carry a known vulnerability, and over 60% of dev shops find vulnerabilities in their code. 30% of Docker Hub containers include a high priority known security hole, and 60% of the top 100,000 websites use client-side libraries with known security gaps.\nTo find known security issues, take stock of your dependencies and match them against language-specific lists such as Snyk\u2019s vulnerability DB for Node.js, rubysec for Ruby, victims-db for Python and OWASP\u2019s Dependency Check for Java. Once found, you can fix most issues by upgrading the component in question, though that may be tricky for indirect dependencies. \nThis process is still way too painful, which means most teams don\u2019t do it. The Snyk team and I are hoping to change that by making it as easy as possible to find, fix and monitor known vulnerabilities in your dependencies. Snyk\u2019s wizard will help you find and fix these issues through guided upgrades and patches, and adding Snyk\u2019s test to your continuous integration and deployment (CI/CD) will help you stay secure as your code evolves.\nNote that newly disclosed vulnerabilities usually impact old code \u2013 the one you\u2019re running in production. This means you have to stay alert when new vulnerabilities are disclosed, so you can fix them before attackers can exploit them. You can do so by subscribing to vulnerability lists like US-CERT, OSVDB and NVD. Snyk\u2019s monitor will proactively let you know about new disclosures relevant to your code, but only for Node.js for now \u2013 you can register to get updated when we expand.\nSecuring Yourself\nIn addition to making your application secure, you should make the contributors to that application secure \u2013 including you. Earlier this year we\u2019ve seen attackers target mobile app developers with a malicious Xcode. The real target, however, wasn\u2019t these developers, but rather the users of the apps they create. That you create. Securing your own work environment is a key part of keeping your apps secure, and your users from being compromised.\nThere\u2019s no single step that will make you fully secure, but here are a few steps that can make a big impact:\n\nUse 2FA on all the services related to the application, notably source control (e.g. GitHub), cloud platform (e.g. AWS), CI/CD, CDN, DNS provider and domain registrar. If an attacker compromises any one of those, they could modify or replace your entire application. I\u2019d recommend using 2FA on all your personal services too.\nUse a password manager (e.g. 1Password, LastPass) to ensure you have a separate and complex password for each service. Some of these services will get hacked, and passwords will leak. When that happens, don\u2019t let the attackers access your other systems too.\nSecure your workstation. Be careful what you download, lock your screen when you walk away, change default passwords on services you install, run antivirus software, etc. Malware on your machine can translate to malware in your applications.\nBe very wary of phishing. Smart attackers use \u2018spear phishing\u2019 techniques to gain access to specific systems, and can trick even security savvy users. There are even phishing scams targeting users with 2FA. 
Be alert to phishy emails.\nDon\u2019t install things through curl | sudo bash, especially if the URL is on GitHub, meaning someone else controls it. Don\u2019t do it on your machines, and definitely don\u2019t do it in your CI/CD systems. Seriously.\n\nStaying secure should be important to you personally, but it\u2019s doubly important when you have privileged access to an application. Such access makes you a way to reach many more users, and therefore a more compelling target for bad actors.\nA Culture of Security\nUsing HTTPS, enabling two-factor authentication and fixing known vulnerabilities are significant steps in building security at your core. As you implement them, remember that these are just a few steps in a longer journey.\nThe end goal is to embrace security as an aspect of quality, and accept we all share the responsibility of keeping ourselves \u2013 and our users \u2013 safe.", "year": "2015", "author": "Guy Podjarny", "author_slug": "guypodjarny", "published": "2015-12-11T00:00:00+00:00", "url": "https://24ways.org/2015/upping-your-web-security-game/", "topic": "code"} {"rowid": 60, "title": "What\u2019s Ahead for Your Data in 2016?", "contents": "Who owns your data? Who decides what can you do with it? Where can you store it? What guarantee do you have over your data\u2019s privacy? Where can you publish your work? Can you adapt software to accommodate your disability? Is your tiny agency subject to corporate regulation? Does another country have rights over your intellectual property?\nIf you aren\u2019t the kind of person who is interested in international politics, I hate to break it to you: in 2016 the legal foundations which underpin our work on the web are being revisited in not one but three major international political agreements, and every single one of those questions is up for grabs. These agreements \u2013 the draft EU Data Protection Regulation (EUDPR), the Trans-Pacific Partnership (TPP), and the draft Transatlantic Trade and Investment Partnership (TTIP) \u2013 stand poised to have a major impact on your data, your workflows, and your digital rights. While some proposed changes could protect the open web for the future, other provisions would set the internet back several decades.\nIn this article we will review the issues you need to be aware of as a digital professional. While each of these agreements covers dozens of topics ranging from climate change to food safety, we will focus solely on the aspects which pertain to the work we do on the web.\nThe Trans-Pacific Partnership\nThe Trans-Pacific Partnership (TPP) is a free trade agreement between the US, Japan, Malaysia, Vietnam, Singapore, Brunei, Australia, New Zealand, Canada, Mexico, Chile and Peru \u2013 a bloc comprising 40% of the world\u2019s economy. The agreement is expected to be signed by all parties, and thereby to come into effect, in 2016. This agreement is ostensibly about the bloc and its members working together for their common interests. However, the latest draft text of the TPP, which was formulated entirely in secret, has only been made publicly available on a Medium blog published by the U.S. Trade Representative which features a patriotic banner at the top proclaiming \u201cTPP: Made in America.\u201d The message sent about who holds the balance of power in this agreement, and whose interests it will benefit, is clear.\nBy far the most controversial area of the TPP has centred around the provisions on intellectual property. 
These include copyright terms of up to 120 years, mandatory takedowns of allegedly infringing content in response to just one complaint regardless of that complaint\u2019s validity, heavy and disproportionate penalties for alleged violations, and \u2013 most frightening of all \u2013 government seizures of equipment allegedly used for copyright violations. All of these provisions have been raised without regard for the fact that a trade agreement is not the appropriate venue to negotiate intellectual property law.\nOther draft TPP provisions would restrict the digital rights of people with disabilities by banning the workarounds they use every day. These include no exemptions for the adaptations of copywritten works for use in accessible technology (such as text-to-speech in ebook readers), a ban on circumventing DRM or digital locks in order to convert a file to an accessible format, and requiring the takedown of adapted works, such as a video with added subtitles, even if that adaptation would normally have fallen under the definition of fair use.\nThe e-commerce provisions would prohibit data localisation, the practice of requiring data to be physically stored on servers within a country\u2019s borders. Data localisation is growing in popularity following the Snowden revelations, and some of your own personal data may have been recently \u201clocalised\u201d in response to the Safe Harbor verdict. Prohibiting data localisation through the TPP would address the symptom but not the cause.\nThe Electronic Frontier Foundation has published an excellent summary of the digital rights issues raised by the agreement along with suggested actions American readers can take to speak out.\nTransatlantic Trade and Investment Partnership\nTTIP stands for the Transatlantic Trade and Investment Partnership, a draft free trade agreement between the United States and the EU. The plan has been hugely controversial and divisive, and the internet and digital provisions of the draft form just a small part of that contention.\nThe most striking digital provision of TTIP is an attempt to circumvent and override European data protection law. As EDRI, a European digital rights organisation, noted:\n\n\u201cthe US proposal would authorise the transfer of EU citizens\u2019 personal data to any country, trumping the EU data protection framework, which ensures that this data can only be transferred in clearly defined circumstances. For years, the US has been trying to bypass the default requirement for storage of personal data in the EU. It is therefore not surprising to see such a proposal being {introduced} in the context of the trade negotiations.\u201d\n\nThis draft provision was written before the Safe Harbor data protection agreement between the EU and US was invalidated by the Court of Justice of the European Union. In other words, there is no longer any protective agreement in place, and our data is as vulnerable as this political situation. However, data protection is a matter of its own law, the acting Data Protection Directive and the draft EU Data Protection Reform. A trade agreement, be it the TTIP or the TPP, is not the appropriate place to revamp a law on data protection.\nOther digital law issues raised by TTIP include the possibility of renegotiating standards on encryption (which in practice means lowering them) and renegotiating intellectual property rights such as copyright. 
The spectre of net neutrality has even put in an appearance, with an attempt to introduce rules on access to the internet itself being introduced as provisions.\nTTIP is still under discussion, and this month the EU trade representative said that \u201cwe agreed to further intensify our work during 2016 to help negotiations move forward rapidly.\u201d This has been cleverly worded: this means the agreement has little chance of being passed or coming into effect in 2016, which buys civil society more precious time to speak out.\nThe EU Data Protection Regulation\nOn 15 December 2015 the European Commission announced their agreement on the text of the draft General Data Protection Regulation. This law will replace its predecessor, the EU Data Protection Regulation of 1995, which has done a remarkable job of protecting data privacy across the continent throughout two decades of constant internet evolution.\nThe goal of the reform process has been to return power over data, and its uses, to citizens. Users will have more control over what data is captured about them, how it is used, how it is retained, and how it can be deleted. Businesses and digital professionals, in turn, will have to restructure their relationships with client and customer data. Compliance obligations will increase, and difficult choices will have to be made. However, this time should be seen as an opportunity to rethink our relationship with data. After Snowden, Schrems, and Safe Harbor, it is clear that we cannot go back to the way things were before. In an era of where every one of our heartbeats is recorded on a wearable device and uploaded to a surveilled data centre in another country, the need for reform has never been more acute.\nWhile texts of the draft GDPR are available, there is not enough mulled wine in the world that will help you get through them. Instead, the law firm Fieldfisher Waterhouse has produced this helpful infographic which will give you a good idea of the changes we can expect to see (view full size):\n\nThe most surprising outcome announced on 15 December was the new regulation\u2019s teeth. Under the new law, companies that fail to heed the updated data protection rules will face fines of up to 4% of their global turnover. Additionally, the law expands the liability for data protection to both the controller (the company hosting the data) and the data processor (the company using the data). The new law will also introduce a one-stop shop for resolving concerns over data misuse. Companies will no longer be able to headquarter their European operations in countries which are perceived to have relatively light-touch data protection enforcement (that means you, Ireland) as a means of automatically rejecting any complaints filed by citizens outside that country.\nFor digital professionals, the most immediate concern is analytics. In fact, I am going to make a prediction: in 2016 we will begin to see the same misguided war on analytics that we saw on cookies. By increasing the legal liabilities for both data processors and controllers \u2013 in other words, the company providing the analytics as well as the site administrator studying them \u2013 the new regulation risks creating disproportionate burdens as well as the same \u201cguilt by association\u201d risks we saw in 2012. There have already been statements made by some within the privacy community that analytics are tracking, and tracking is surveillance, therefore analytics are evil. 
Yet \u201cjust don\u2019t use analytics,\u201d as was suggested by one advocate, is simply not an option. European regulators should consult with the web community to gain a clear understanding of why analytics are vital to everyday site administrators, and must find a happy medium that protects users\u2019 data without criminalising every website by default. No one wants a repeat of the crisis of consent, as well as the scaremongering, caused by the cookie law.\nAssuming the text is adopted in 2016, the new EU Data Protection Regulation would not come into effect until 2018. We have a considerable challenge ahead, but we also have plenty of time to get it right.", "year": "2015", "author": "Heather Burns", "author_slug": "heatherburns", "published": "2015-12-21T00:00:00+00:00", "url": "https://24ways.org/2015/whats-ahead-for-your-data-in-2016/", "topic": "business"} {"rowid": 49, "title": "Universal React", "contents": "One of the libraries to receive a huge amount of focus in 2015 has been ReactJS, a library created by Facebook for building user interfaces and web applications.\nMore generally we\u2019ve seen an even greater rise in the number of applications built primarily on the client side with most of the logic implemented in JavaScript. One of the main issues with building an app in this way is that you immediately forgo any customers who might browse with JavaScript turned off, and you can also miss out on any robots that might visit your site to crawl it (such as Google\u2019s search bots). Additionally, we gain a performance improvement by being able to render from the server rather than having to wait for all the JavaScript to be loaded and executed.\nThe good news is that this problem has been recognised and it is possible to build a fully featured client-side application that can be rendered on the server. The way in which these apps work is as follows:\n\nThe user visits www.yoursite.com and the server executes your JavaScript to generate the HTML it needs to render the page.\nIn the background, the client-side JavaScript is executed and takes over the duty of rendering the page.\nThe next time a user clicks, rather than being sent to the server, the client-side app is in control.\nIf the user doesn\u2019t have JavaScript enabled, each click on a link goes to the server and they get the server-rendered content again.\n\nThis means you can still provide a very quick and snappy experience for JavaScript users without having to abandon your non-JS users. We achieve this by writing JavaScript that can be executed on the server or on the client (you might have heard this referred to as isomorphic) and using a JavaScript framework that\u2019s clever enough handle server- or client-side execution. Currently, ReactJS is leading the way here, although Ember and Angular are both working on solutions to this problem.\nIt\u2019s worth noting that this tutorial assumes some familiarity with React in general, its syntax and concepts. If you\u2019d like a refresher, the ReactJS docs are a good place to start.\n\u00a0Getting started\nWe\u2019re going to create a tiny ReactJS application that will work on the server and the client. First we\u2019ll need to create a new project and install some dependencies. 
In a new, blank directory, run:\nnpm init -y\nnpm install --save ejs express react react-router react-dom\nThat will create a new project and install our dependencies:\n\nejs is a templating engine that we\u2019ll use to render our HTML on the server.\nexpress is a small web framework we\u2019ll run our server on.\nreact-router is a popular routing solution for React so our app can fully support and respect URLs.\nreact-dom is a small React library used for rendering React components.\n\nWe\u2019re also going to write all our code in ECMAScript 6, and therefore need to install BabelJS and configure that too.\nnpm install --save-dev babel-cli babel-preset-es2015 babel-preset-react\nThen, create a .babelrc file that contains the following:\n{\n \"presets\": [\"es2015\", \"react\"]\n}\nWhat we\u2019ve done here is install Babel\u2019s command line interface (CLI) tool and configured it to transform our code from ECMAScript 6 (or ES2015) to ECMAScript 5, which is more widely supported. We\u2019ll need the React transforms when we start writing JSX when working with React.\nCreating a server\nFor now, our ExpressJS server is pretty straightforward. All we\u2019ll do is render a view that says \u2018Hello World\u2019. Here\u2019s our server code:\nimport express from 'express';\nimport http from 'http';\n\nconst app = express();\n\napp.use(express.static('public'));\n\napp.set('view engine', 'ejs');\n\napp.get('*', (req, res) => {\n res.render('index');\n});\n\nconst server = http.createServer(app);\n\nserver.listen(3003);\nserver.on('listening', () => {\n console.log('Listening on 3003');\n});\nHere we\u2019re using ES6 modules, which I wrote about on 24 ways last year, if you\u2019d like a reminder. We tell the app to render the index view on any GET request (that\u2019s what app.get('*') means, the wildcard matches any route).\nWe now need to create the index view file, which Express expects to be defined in views/index.ejs:\n\n\n \n My App\n \n\n \n Hello World\n \n\nFinally, we\u2019re ready to run the server. Because we installed babel-cli earlier we have access to the babel-node executable, which will transform all your code before running it through node. Run this command:\n./node_modules/.bin/babel-node server.js\nAnd you should now be able to visit http://localhost:3003 and see \u2018Hello World\u2019 right there:\n\nBuilding the React app\nNow we\u2019ll build the React application entirely on the server, before adding the client-side JavaScript right at the end. Our app will have two routes, / and /about which will both show a small amount of content. This will demonstrate how to use React Router on the server side to make sure our React app plays nicely with URLs.\nFirstly, let\u2019s update views/index.ejs. Our server will figure out what HTML it needs to render, and pass that into the view. We can pass a value into our view when we render it, and then use EJS syntax to tell it to output that data. Update the template file so the body looks like so:\n\n <%- markup %>\n\nNext, we\u2019ll define the routes we want our app to have using React Router. For now we\u2019ll just define the index route, and not worry about the /about route quite yet. We could define our routes in JSX, but I think for server-side rendering it\u2019s clearer to define them as an object. 
Here\u2019s what we\u2019re starting with:\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\nThese are just placed at the top of server.js, after the import statements. Later we\u2019ll move these into a separate file, but for now they are fine where they are.\nNotice how I define first that the AppComponent should be used at the '' path, which effectively means it matches every single route and becomes a container for all our other components. Then I give it a child route of /, which will match the IndexComponent. Before we hook these routes up with our server, let\u2019s quickly define components/app.js and components/index.js. app.js looks like so:\nimport React from 'react';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n
      <div>
        <h2>Welcome to my App</h2>
        { this.props.children }
      </div>
\n );\n }\n}\nWhen a React Router route has child components, they are given to us in the props under the children key, so we need to include them in the code we want to render for this component. The index.js component is pretty bland:\nimport React from 'react';\n\nexport default class IndexComponent extends React.Component {\n render() {\n return (\n
      <div>
        <p>This is the index page</p>
      </div>
\n );\n }\n}\nServer-side routing with React Router\nHead back into server.js, and firstly we\u2019ll need to add some new imports:\nimport React from 'react';\nimport { renderToString } from 'react-dom/server';\nimport { match, RoutingContext } from 'react-router';\n\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nThe ReactDOM package provides react-dom/server which includes a renderToString method that takes a React component and produces the HTML string output of the component. It\u2019s this method that we\u2019ll use to render the HTML from the server, generated by React. From the React Router package we use match, a function used to find a matching route for a URL; and RoutingContext, a React component provided by React Router that we\u2019ll need to render. This wraps up our components and provides some functionality that ties React Router together with our app. Generally you don\u2019t need to concern yourself about how this component works, so don\u2019t worry too much.\nNow for the good bit: we can update our app.get('*') route with the code that matches the URL against the React routes:\napp.get('*', (req, res) => {\n // routes is our object of React routes defined above\n match({ routes, location: req.url }, (err, redirectLocation, props) => {\n if (err) {\n // something went badly wrong, so 500 with a message\n res.status(500).send(err.message);\n } else if (redirectLocation) {\n // we matched a ReactRouter redirect, so redirect from the server\n res.redirect(302, redirectLocation.pathname + redirectLocation.search);\n } else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString();\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n\n } else {\n // no route match, so 404. In a real app you might render a custom\n // 404 view here\n res.sendStatus(404);\n }\n });\n});\nWe call match, giving it the routes object we defined earlier and req.url, which contains the URL of the request. It calls a callback function we give it, with err, redirectLocation and props as the arguments. The first two conditionals in the callback function just deal with an error occuring or a redirect (React Router has built in redirect support). The most interesting bit is the third conditional, else if (props). If we got given props and we\u2019ve made it this far it means we found a matching component to render and we can use this code to render it:\n...\n} else if (props) {\n // if we got props, that means we found a valid component to render\n // for the given route\n const markup = renderToString();\n\n // render `index.ejs`, but pass in the markup we want it to display\n res.render('index', { markup })\n} else {\n ...\n}\nThe renderToString method from ReactDOM takes that RoutingContext component we mentioned earlier and renders it with the properties required. Again, you need not concern yourself with what this specific component does or what the props are. Most of this is data that React Router provides for us on top of our components.\nNote the {...props}, which is a neat bit of JSX syntax that spreads out our object into key value properties. 
To see this better, note the two pieces of JSX code below, both of which are equivalent:\n\n\n// OR:\n\nconst props = { a: \"foo\", b: \"bar\" };\n\nRunning the server again\nI know that felt like a lot of work, but the good news is that once you\u2019ve set this up you are free to focus on building your React components, safe in the knowledge that your server-side rendering is working. To check, restart the server and head to http://localhost:3003 once more. You should see it all working!\n\nRefactoring and one more route\nBefore we move on to getting this code running on the client, let\u2019s add one more route and do some tidying up. First, move our routes object out into routes.js:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n }\n ]\n}\n\nexport { routes };\nAnd then update server.js. You can remove the two component imports and replace them with:\nimport { routes } from './routes';\nFinally, let\u2019s add one more route for ./about and links between them. Create components/about.js:\nimport React from 'react';\n\nexport default class AboutComponent extends React.Component {\n render() {\n return (\n
      <div>
        <p>A little bit about me.</p>
      </div>
\n );\n }\n}\nAnd then you can add it to routes.js too:\nimport AppComponent from './components/app';\nimport IndexComponent from './components/index';\nimport AboutComponent from './components/about';\n\nconst routes = {\n path: '',\n component: AppComponent,\n childRoutes: [\n {\n path: '/',\n component: IndexComponent\n },\n {\n path: '/about',\n component: AboutComponent\n }\n ]\n}\n\nexport { routes };\nIf you now restart the server and head to http://localhost:3003/about` you\u2019ll see the about page!\n\nFor the finishing touch we\u2019ll use the React Router link component to add some links between the pages. Edit components/app.js to look like so:\nimport React from 'react';\nimport { Link } from 'react-router';\n\nexport default class AppComponent extends React.Component {\n render() {\n return (\n
      <div>
        <h2>Welcome to my App</h2>
        <ul>
          <li><Link to="/">Home</Link></li>
          <li><Link to="/about">About</Link></li>
        </ul>
        { this.props.children }
      </div>
\n );\n }\n}\nYou can now click between the pages to navigate. However, everytime we do so the requests hit the server. Now we\u2019re going to make our final change, such that after the app has been rendered on the server once, it gets rendered and managed in the client, providing that snappy client-side app experience.\nClient-side rendering\nFirst, we\u2019re going to make a small change to views/index.ejs. React doesn\u2019t like rendering directly into the body and will give a warning when you do so. To prevent this we\u2019ll wrap our app in a div:\n\n
  <div id="app"><%- markup %></div>
  <script src="build.js"></script>
\n \n\nI\u2019ve also added in a script tag to build.js, which is the file we\u2019ll generate containing all our client-side code.\nNext, create client-render.js. This is going to be the only bit of JavaScript that\u2019s exclusive to the client side. In it we need to pull in our routes and render them to the DOM.\nimport React from 'react';\nimport ReactDOM from 'react-dom';\nimport { Router } from 'react-router';\n\nimport { routes } from './routes';\n\nimport createBrowserHistory from 'history/lib/createBrowserHistory';\n\nReactDOM.render(\n ,\n document.getElementById('app')\n)\nThe first thing you might notice is the mention of createBrowserHistory. React Router is built on top of the history module, a module that listens to the browser\u2019s address bar and parses the new location. It has many modes of operation: it can keep track using a hashbang, such as http://localhost/#!/about (this is the default), or you can tell it to use the HTML5 history API by calling createBrowserHistory, which is what we\u2019ve done. This will keep the URLs nice and neat and make sure the client and the server are using the same URL structure. You can read more about React Router and histories in the React Router documentation.\nFinally we use ReactDOM.render and give it the Router component, telling it about all our routes, and also tell ReactDOM where to render, the #app element.\nGenerating build.js\nWe\u2019re actually almost there! The final thing we need to do is generate our client side bundle. For this we\u2019re going to use webpack, a module bundler that can take our application, follow all the imports and generate one large bundle from them. We\u2019ll install it and babel-loader, a webpack plugin for transforming code through Babel.\nnpm install --save-dev webpack babel-loader\nTo run webpack we just need to create a configuration file, called webpack.config.js. Create the file in the root of our application and add the following code:\nvar path = require('path');\nmodule.exports = {\n entry: path.join(process.cwd(), 'client-render.js'),\n output: {\n path: './public/',\n filename: 'build.js'\n },\n module: {\n loaders: [\n {\n test: /.js$/,\n loader: 'babel'\n }\n ]\n }\n}\nNote first that this file can\u2019t be written in ES6 as it doesn\u2019t get transformed. The first thing we do is tell webpack the main entry point for our application, which is client-render.js. We use process.cwd() because webpack expects an exact location \u2013 if we just gave it the string \u2018client-render.js\u2019, webpack wouldn\u2019t be able to find it.\nNext, we tell webpack where to output our file, and here I\u2019m telling it to place the file in public/build.js. Finally we tell webpack that every time it hits a file that ends in .js, it should use the babel-loader plugin to transform the code first.\nNow we\u2019re ready to generate the bundle!\n./node_modules/.bin/webpack\nThis will take a fair few seconds to run (on my machine it\u2019s about seven or eight), but once it has it will have created public/build.js, a client-side bundle of our application. If you restart your server once more you\u2019ll see that we can now navigate around our application without hitting the server, because React on the client takes over. Perfect!\nThe first bundle that webpack generates is pretty slow, but if you run webpack -w it will go into watch mode, where it watches files for changes and regenerates the bundle. 
The key thing is that it only regenerates the small pieces of the bundle it needs, so while the first bundle is very slow, the rest are lightning fast. I recommend leaving webpack constantly running in watch mode when you\u2019re developing.\nConclusions\nFirst, if you\u2019d like to look through this code yourself you can find it all on GitHub. Feel free to raise an issue there or tweet me if you have any problems or would like to ask further questions.\nNext, I want to stress that you shouldn\u2019t use this as an excuse to build all your apps in this way. Some of you might be wondering whether a static site like the one we built today is worth its complexity, and you\u2019d be right. I used it as it\u2019s an easy example to work with but in the future you should carefully consider your reasons for wanting to build a universal React application and make sure it\u2019s a suitable infrastructure for you.\nWith that, all that\u2019s left for me to do is wish you a very merry Christmas and best of luck with your React applications!", "year": "2015", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2015-12-05T00:00:00+00:00", "url": "https://24ways.org/2015/universal-react/", "topic": "code"} {"rowid": 69, "title": "How to Do a UX Review", "contents": "A UX review is where an expert goes through a website looking for usability and experience problems and makes recommendations on how to fix them. \nI\u2019ve completed a number of UX reviews over my twelve years working as a user experience consultant and I thought I\u2019d share my approach. \nI\u2019ll be talking about reviewing websites here; you can adapt the approach for web apps, or mobile or desktop apps. \nWhy conduct a review\nTypically, a client asks for a review to be undertaken by a trusted and, ideally, detached third party who either works for an agency or is a freelancer. Often they may ask a new member of the UX team to complete one, or even set it as a task for a job interview. This indicates the client is looking for an objective view, seen from the outside as a user would see the website. \nI always suggest conducting some user research rather than a review. Users know their goals and watching them make (what you might think of as) mistakes on the website is invaluable. Conducting research with six users can give you six hours\u2019 worth of review material from six viewpoints. In short, user research can identify more problems and show how common those problems might be. \nThere are three reasons, though, why a review might better suit client needs than user research: \n\nQuick results: user research and analysis takes at least three weeks.\nLimited budget: the \u00a36\u201310,000 cost to run user research is about twice the cost of a UX review. \nUsers are hard to reach: in the business-to-business world, reaching users is difficult, especially if your users hold senior positions in their organisations. Working with consumers is much easier as there are often more of them. \n\nThere is some debate about the benefits of user research over UX review. In my experience you learn far more from research, but opinions differ. \nBe objective\nThe number one mistake many UX reviewers make is reporting back the issues they identify as their opinion. This can cause credibility problems because you have to keep justifying why your opinion is correct. 
\nI\u2019ve had the most success when giving bad news in a UX review and then finally getting things fixed when I have been as objective as possible, offering evidence for why something may be a problem. \nTo be objective we need two sources of data: numbers from analytics to appeal to reason; and stories from users in the form of personas to speak to emotions. Highlighting issues with dispassionate numerical data helps show the extent of the problem. Making the problems more human using personas can make the problem feel more real. \nNumbers from analytics\nThe majority of clients I work with use Google Analytics, but if you use a different analytics package the same concepts apply. I use analytics to find two sets of things.\n1. Landing pages and search terms\nLanding pages are the pages users see first when they visit a website \u2013 more often than not via a Google search. Landing pages reveal user goals. If a user landed on a page called \u2018Yellow shoes\u2019 their goal may well be to find out about or buy some yellow shoes. \nIt would be great to see all the search terms bringing people to the website but in 2011 Google stopped providing search term data to (rightly!) protect users\u2019 privacy. You can get some search term data from Google Webmaster tools, but we must rely on landing pages as a clue to our users\u2019 goals. \nThe thing to look for is high-traffic landing pages with a high bounce rate. Bounce rate is the percentage of visitors to a website who navigate away from the site after viewing only one page. A high bounce rate (over 50%) isn\u2019t good; above 70% is bad.\nTo get a list of high-traffic landing pages with a high bounce rate install this bespoke report.\nGoogle Analytics showing landing pages ordered by popularity and the bounce rate for each.\nThis is the list of pages with high demand and that have real problems as the bounce rate is high. This is the main focus of the UX review. \n2. User flows\nWe have the beginnings of the user journey: search terms and initial landing pages. Now we can tap into the really useful bit of Google Analytics. Called behaviour flows, they show the most common order of pages visited. \nBehaviour flows from Google Analytics, showing the routes users took through the website.\nHere we can see the second and third (and so on) pages users visited. Importantly, we can also see the drop-outs at each step. \nIf your client has it set up, you can also set goal pages (for example, a post-checkout contact us and thank you page). You can then see a similar view that tracks back from the goal pages. If your client doesn\u2019t have this, suggest they set up goal tracking. It\u2019s easy to do. \nWe now have the remainder of the user journey. \nA user journey\nExpect the work in analytics to take up to a day. \nWe may well identify more than one user journey, starting from different landing pages and going to different second- and third-level pages. That\u2019s a good thing and shows we have different user types. Talking of user types, we need to define who our users are. \nPersonas\nWe have some user journeys and now we need to understand more about our users\u2019 motivations and goals. \nI have a love-hate relationship with personas, but used properly these portraits of users can help bring a human touch to our UX review. \nI suggest using a very cut-down view of a persona. 
My old friends Steve Cable and Richard Caddick at cxpartners have a great free template for personas from their book Communicating the User Experience.\nThe first thing to do is find a picture that represents that persona. Don\u2019t use crappy stock photography \u2013 it\u2019s sometimes hard to relate to perfect-looking people) \u2013 use authentic-looking people. Here\u2019s a good collection of persona photos. \nAn example persona.\nThe personas have three basic attributes:\n\nGoals: we can complete these drawing on the analytics data we have (see example).\nMusts: things we have to do to meet the persona\u2019s needs.\nMust nots: a list of things we really shouldn\u2019t do. \n\nCompleting points 2 and 3 can often be done during the writing of the report. \nLet\u2019s take an example. We know that the search term \u2018yellow shoes\u2019 takes the user to the landing page for yellow shoes. We also know this page has a high bounce rate, meaning it doesn\u2019t provide a good experience. \nWith our expert hat on we can review the page. We will find two types of problem: \n\nUsability issues: ineffective button placement or incorrect wording, links not looking like links, and so on. \nExperience issues: for example, if a product is out of stock we have to contact the business to ask them to restock. \n\nThat link is very small and hard to see.\nWe could identify that the contact button isn\u2019t easy to find (a usability issue) but that\u2019s not the real problem here. That the user has to ask the business to restock the item is a bad user experience. We add this to our personas\u2019 must nots. The big experience problems with the site form the musts and must nots for our personas. \nWe now have a story around our user journey that highlights what is going wrong. \nIf we\u2019ve identified a number of user journeys, multiple landing pages and differing second and third pages visited, we can create more personas to match. A good rule of thumb is no more than three personas. Any more and they lose impact, watering down your results. \nExpect persona creation to take up to a day to complete. \nLet\u2019s start the review\nWe take the user journeys and we follow them step by step, working through the website looking for the reasons why users drop out at each step. Using Keynote or PowerPoint, I structure the final report around the user journey with separate sections for each step.\nFor each step we\u2019ll find both usability and experience problems. Split the results into those two groups. \nUsability problems are fairly easy to fix as they\u2019re often quick design changes. As you go along, note the usability problems in one place: we\u2019ll call this \u2018quick wins\u2019. Simple quick fixes are a reassuring thing for a client to see and mean they can get started on stuff right away. You can mark the severity of usability issues. Use a scale from 1 to 3 (if you use 1 to 5 everything ends up being a 3!) where 1 is minor and 3 is serious. \nReview the website on the device you\u2019d expect your persona to use. Are they using the site on a smartphone? Review it on a smartphone. \nI allow one page or slide per problem, which allows me to explain what is going wrong. For a usability problem I\u2019ll often make a quick wireframe or sketch to explain how to address it. \nA UX review slide displaying all the elements to be addressed. 
These slides may be viewed from across the room on a screen so zoom into areas of discussion.\n(Quick tip: if you use Google Chrome, try Awesome Screenshot to capture screens.)\nWhen it comes to the more severe experience problems \u2013 things like an online shop not offering next day delivery, or a business that needs to promise to get back to new customers within a few hours \u2013 these will take more than a tweak to the UI to fix. \nCall these something different. I use the terms like business challenges and customer experience issues as they show that it will take changes to the organisation and its processes to address them. It\u2019s often beyond the remit of a humble UX consultant to recommend how to fix organisational issues, so don\u2019t try. \nAgain, create a page within your document to collect all of the business challenges together. \nExpect the review to take between one and three days to complete. \nThe final report should follow this structure:\n\nThe approach\nOverview of usability quick wins\nOverview of experience issues\nOverview of Google Analytics findings\nThe user journeys \nThe personas\nDetailed page-by-page review (broken down by steps on the user journey)\n\nThere are two academic theories to help with the review. \nHeuristic evaluation is a set of criteria to organise the issues you find. They\u2019re great for categorising the usability issues you identify but in practice they can be quite cumbersome to apply. \nI prefer the more scientific and much simpler cognitive walkthrough that is focused on goals and actions.\nA workshop to go through the findings\nThe most important part of the UX review process is to talk through the issues with your client and their team. \nA document can only communicate a certain amount. Conversations about the findings will help the team understand the severity of the issues you\u2019ve uncovered and give them a chance to discuss what to do about them. \nExpect the workshop to last around three hours.\nWhen presenting the report, explain the method you used to conduct the review, the data sources, personas and the reasoning behind the issues you found. Start by going through the usability issues. Often these won\u2019t be contentious and you can build trust and improve your credibility by making simple, easy to implement changes. \nThe most valuable part of the workshop is conversation around each issue, especially the experience problems. The workshop should include time to talk through each experience issue and how the team will address it. \nI collect actions on index cards throughout the workshop and make a note of who will take what action with each problem. \nIndex cards showing the problem and who is responsible.\nWhen talking through the issues, the person who designed the site is probably in the room \u2013 they may well feel threatened. So be nice.\nWhen I talk through the report I try to have strong ideas, weakly held.\nAt the end of the workshop you\u2019ll have talked through each of the issues and identified who is responsible for addressing them. To close the workshop I hand out the cards to the relevant people, giving them a physical reminder of the next steps they have to take. \nThat\u2019s my process for conducting a review. 
I\u2019d love to hear any tips you have in the comments.", "year": "2015", "author": "Joe Leech", "author_slug": "joeleech", "published": "2015-12-03T00:00:00+00:00", "url": "https://24ways.org/2015/how-to-do-a-ux-review/", "topic": "ux"} {"rowid": 64, "title": "Being Responsive to the Small Things", "contents": "It\u2019s that time of the year again to trim the tree with decorations. Or maybe a DOM tree?\nAny web page is made of HTML elements that lay themselves out in a tree structure. We start at the top and then have multiple branches with branches that branch out from there. \n\nTo decorate our tree, we use CSS to specify which branches should receive the tinsel we wish to adorn upon it. It\u2019s all so lovely.\nIn years past, this was rather straightforward. But these days, our trees need to be versatile. They need to be responsive!\nResponsive web design is pretty wonderful, isn\u2019t it? Based on our viewport, we can decide how elements on the page should change their appearance to accommodate various constraints using media queries.\nClearleft have a delightfully clean and responsive site\nAlas, it\u2019s not all sunshine, lollipops, and rainbows. \nWith complex layouts, we may have design chunks \u2014 let\u2019s call them components \u2014 that appear in different contexts. Each context may end up providing its own constraints on the design, both in its default state and in its possibly various responsive states.\n\nMedia queries, however, limit us to the context of the entire viewport, not individual containers on the page. For every container our component lives in, we need to specify how to rearrange things in that context. The more complex the system, the more contexts we need to write code for.\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\nEach new component and each new breakpoint just makes the entire system that much more difficult to maintain. \n@media (min-width: 600px) {\n .features > .component { }\n .grid > .component {}\n}\n\n@media (min-width: 800px) {\n .features > .component { }\n .sidebar > .component {}\n .grid > .component {}\n}\n\n@media (min-width: 1024px) {\n .features > .component { }\n}\nEnter container queries\nContainer queries, also known as element queries, allow you to specify conditional CSS based on the width (or maybe height) of the container that an element lives in. In doing so, you no longer have to consider the entire page and the interplay of all the elements within. \nWith container queries, you\u2019ll be able to consider the breakpoints of just the component you\u2019re designing. As a result, you end up specifying less code and the components you develop have fewer dependencies on the things around them. (I guess that makes your components more independent.)\nAwesome, right?\nThere\u2019s only one catch.\nBrowsers can\u2019t do container queries. There\u2019s not even an official specification for them yet. The Responsive Issues (n\u00e9e Images) Community Group is looking into solving how such a thing would actually work. \nSee, container queries are tricky from an implementation perspective. The contents of a container can affect the size of the container. Because of this, you end up with troublesome circular references. \nFor example, if the width of the container is under 500px then the width of the child element should be 600px, and if the width of the container is over 500px then the width of the child element should be 400px. \nCan you see the dilemma? 
When the container is under 500px, the child element resizes to 600px and suddenly the container is 600px. If the container is 600px, then the child element is 400px! And so on, forever. This is bad.\nI guess we should all just go home and sulk about how we just got a pile of socks when we really wanted the Millennium Falcon. \nOur saviour this Christmas: JavaScript\nThe three wise men \u2014 Tim Berners-Lee, H\u00e5kon Wium Lie, and Brendan Eich \u2014 brought us the gifts of HTML, CSS, and JavaScript. \nTo date, there are a handful of open source solutions to fill the gap until a browser implementation sees the light of day.\n\nElementary by Scott Jehl\nElementQuery by Tyson Matanich\nEQ.js by Sam Richards\nCSS Element Queries from Marcj\n\nUsing any of these can sometimes feel like your toy broke within ten minutes of unwrapping it.\nEach take their own approach on how to specify the query conditions. For example, Elementary, the smallest of the group, only supports min-width declarations made in a :before selector.\n.mod-foo:before {\n content: \u201c300 410 500\u201d;\n}\nThe script loops through all the elements that you specify, reading the content property and then setting an attribute value on the HTML element, allowing you to use CSS to style that condition. \n.mod-foo[data-minwidth~=\"300\"] {\n background: blue;\n}\nTo get the script to run, you\u2019ll need to set up event handlers for when the page loads and for when it resizes. \nwindow.addEventListener( \"load\", window.elementary, false );\nwindow.addEventListener( \"resize\", window.elementary, false );\nThis works okay for static sites but breaks down on pages where elements can expand or contract, or where new content is dynamically inserted.\nIn the case of EQ.js, the implementation requires the creation of the breakpoints in the HTML. That means that you have implementation details in HTML, JavaScript, and CSS. (Although, with the JavaScript, once it\u2019s in the build system, it shouldn\u2019t ever be much of a concern unless you\u2019re tracking down a bug.)\nAnother problem you may run into is the use of content delivery networks (CDNs) or cross-origin security issues. The ElementQuery and CSS Element Queries libraries need to be able to read the CSS file. If you are unable to set up proper cross-origin resource sharing (CORS) headers, these libraries won\u2019t help.\nAt Shopify, for example, we had all of these problems. The admin that store owners use is very dynamic and the CSS and JavaScript were being loaded from a CDN that prevented the JavaScript from reading the CSS. \nTo go responsive, the team built their own solution \u2014 one similar to the other scripts above, in that it loops through elements and adds or removes classes (instead of data attributes) based on minimum or maximum width.\nThe caveat to this particular approach is that the declaration of breakpoints had to be done in JavaScript. \n elements = [\n { \u2018module\u2019: \u201c.carousel\u201d, \u201cclassName\u201d:\u2019alpha\u2019, minWidth: 768, maxWidth: 1024 },\n { \u2018module\u2019: \u201c.button\u201d, \u201cclassName\u201d:\u2019beta\u2019, minWidth: 768, maxWidth: 1024 } ,\n { \u2018module\u2019: \u201c.grid\u201d, \u201cclassName\u201d:\u2019cappa\u2019, minWidth: 768, maxWidth: 1024 }\n ]\nWith that done, the script then had to be set to run during various events such as inserting new content via Ajax calls. This sometimes reveals itself in flashes of unstyled breakpoints (FOUB). 
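To make the class-toggling approach concrete, here is a stripped-down sketch of the idea. This is my own illustration of the pattern rather than Shopify's actual code, and it reuses the configuration shape from the example above:

var elements = [
  { module: ".carousel", className: "alpha", minWidth: 768, maxWidth: 1024 }
];

function applyContainerClasses() {
  elements.forEach(function (item) {
    var nodes = document.querySelectorAll(item.module);
    Array.prototype.forEach.call(nodes, function (node) {
      // Measure the element's container rather than the viewport.
      var width = node.parentNode.offsetWidth;
      if (width >= item.minWidth && width <= item.maxWidth) {
        node.classList.add(item.className);
      } else {
        node.classList.remove(item.className);
      }
    });
  });
}

window.addEventListener("load", applyContainerClasses, false);
window.addEventListener("resize", applyContainerClasses, false);
// ...and call applyContainerClasses() again whenever new content is inserted via Ajax.

Because the classes only arrive once the script has run, content that is added or resized later can render for a moment without them, which is where that flash comes from.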
An unfortunate side effect but one largely imperceptible.\nUsing this approach, however, allowed the Shopify team to make the admin responsive really quickly. Each member of the team was able to tackle the responsive story for a particular component without much concern for how all the other components would react. \n\nEach element responds to its own breakpoint that would amount to dozens of breakpoints using traditional breakpoints. This approach allows for a truly fluid and adaptive interface for all screens.\nChristmas is over\nI wish I were the bearer of greater tidings and cheer. It\u2019s not all bad, though. We may one day see browsers implement container queries natively. At which point, we shall all rejoice!", "year": "2015", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2015-12-19T00:00:00+00:00", "url": "https://24ways.org/2015/being-responsive-to-the-small-things/", "topic": "code"} {"rowid": 56, "title": "Helping VIPs Care About Performance", "contents": "Making a site feel super fast is the easy part of performance work. Getting people around you to care about site speed is a much bigger challenge. How do we keep the site fast beyond the initial performance work? Keeping very important people like your upper management or clients invested in performance work is critical to keeping a site fast and empowering other designers and developers to contribute.\nThe work to get others to care is so meaty that I dedicated a whole chapter to the topic in my book Designing for Performance. When I speak at conferences, the majority of questions during Q&A are on this topic. When I speak to developers and designers who care about performance, getting other people at one\u2019s organization or agency to care becomes the most pressing question.\nMy primary response to folks who raise this issue is the question: \u201cWhat metric(s) do your VIPs care about?\u201d This is often met with blank stares and raised eyebrows. But it\u2019s also our biggest clue to what we need to do to help empower others to care about performance and work on it. Every organization and executive is different. This means that three major things vary: the primary metrics VIPs care about; the language they use about measuring success; and how change is enacted. By clueing in to these nuances within your organization, you can get a huge leg up on crafting a successful pitch about performance work.\nLet\u2019s start with the metric that we should measure. Sure, (most) everybody cares about money - but is that really the metric that your VIPs are looking at each day to measure the success or efficacy of your site? More likely, dollars are the end game, but the metrics or key performance indicators (KPIs) people focus on might be:\n\nrate of new accounts created/signups\ncost of acquiring or retaining a customer\nvisitor return rate\nvisitor bounce rate\nfavoriting or another interaction rate\n\nThese are just a few examples, but they illustrate how wide-ranging the options are that people care about. I find that developers and designers haven\u2019t necessarily investigated this when trying to get others to care about performance. We often reach for the obvious \u2013 money! \u2013 but if we don\u2019t use the same kind of language our VIPs are using, we might not get too far. You need to know this before you can make the case for performance work.\nTo find out these metrics or KPIs, start reading through the emails your VIPs are sending within your company. What does it say on company wikis? 
Are there major dashboards internally that people are looking at where you could find some good metrics? Listen intently in team meetings or thoroughly read annual reports to see what these metrics could be.\nThe second key here is to pick up on language you can effectively copy and paste as you make the case for performance work. You need to be able to reflect back the metrics that people already find important in a way they\u2019ll be able to hear. Once you know your key metrics, it\u2019s time to figure out how to communicate with your VIPs about performance using language that will resonate with them.\nLet\u2019s start with visit traffic as an example metric that a very important person cares about. Start to dig up research that other people and companies have done that correlates performance and your KPI. For example, cite studies:\n\n\u201cWhen the home page of Google Maps was reduced from 100KB to 70\u201380KB, traffic went up 10% in the first week, and an additional 25% in the following three weeks.\u201d (source).\n\nRead through websites like WPOStats, which collects the spectrum of studies on the impact of performance optimization on user experience and business metrics. Tweet and see if others have done similar research that correlates performance and your site\u2019s main KPI.\nOnce you have collected some research that touches on the same kind of language your VIPs use about the success of your site, it\u2019s time to present it. You can start with something simple, like a qualitative description of the work you\u2019re actively doing to improve the site that translates to improved metrics that your VIPs care about. It can be helpful to append a performance budget to any proposal so you can compare the budget to your site\u2019s reality and how it might positively impact those KPIs folks care about.\nWords and graphs are often only half the battle when it comes to getting others to care about performance. Often, videos appeal to folks\u2019 emotions in a way that is missed when glancing through charts and graphs. On A List Apart I recently detailed how to create videos of how fast your site loads. Let\u2019s say that your VIPs care about how your site loads on mobile devices; it\u2019s time to show them how your site loads on mobile networks.\nOpen video\n\nYou can use these videos to make a number of different statements to your VIPs, depending on what they care about:\n\nLook at how slow our site loads versus our competitor!\nLook at how slow our site loads for users in another country!\nLook at how slow our site loads on mobile networks!\n\nAgain, you really need to know which metrics your VIPs care about and tune into the language they\u2019re using. If they don\u2019t care about the overall user experience of your site on mobile devices, then showing them how slow your site loads on 3G isn\u2019t going to work. This will be your sales pitch; you need to practice and iterate on the language and highlights that will land best with your audience. \nTo make your sales pitch as solid as possible, gut-check your ideas on how to present it with other co-workers to get their feedback. Read up on how to construct effective arguments and deliver them; do some research and see what others have done at your company when pitching to VIPs. Are slides effective? Memos or emails? Hallway conversations? Sometimes the best way to change people\u2019s minds is by mentioning it in informal chats over coffee. Emulate the other leaders in your organization who are successful at this work. 
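If you want a number of your own to sit alongside those studies, the browser can provide one. Here is a minimal sketch that uses the Navigation Timing API to report page load time to Google Analytics as a user timing, assuming analytics.js (the ga() queue) is already on the page; the category and variable names are my own:

window.addEventListener("load", function () {
  // Wait a tick so loadEventEnd has been recorded.
  setTimeout(function () {
    if (!window.performance || !performance.timing || typeof ga !== "function") { return; }
    var t = performance.timing;
    var loadTime = t.loadEventEnd - t.navigationStart;
    if (loadTime > 0) {
      ga("send", "timing", "Performance", "page load", loadTime);
    }
  }, 0);
}, false);

With real load times flowing into the same account as the KPIs your VIPs already watch, it becomes much easier to talk about the two in the same breath.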
\nEvery organization and very important person is different. Learn what metrics folks truly care about, study the language that they use, and apply what you\u2019ve learned in a way that\u2019ll land with those individuals. It may take time to craft your pitch for performance work over time, but it\u2019s important work to do. If you\u2019re able to figure out how to mirror back the language and metrics VIPs care about, and connect the dots to performance for them, you will have a huge leg up on keeping your site fast in the long run.", "year": "2015", "author": "Lara Hogan", "author_slug": "larahogan", "published": "2015-12-08T00:00:00+00:00", "url": "https://24ways.org/2015/helping-vips-care-about-performance/", "topic": "business"} {"rowid": 72, "title": "Designing with Contrast", "contents": "When an appetite for aesthetics over usability becomes the bellwether of user interface design, it\u2019s time to reconsider who we\u2019re designing for.\nOver the last few years, we have questioned the signifiers that gave obvious meaning to the function of interface elements. Strong textures, deep shadows, gradients \u2014 imitations of physical objects \u2014 were discarded. And many, rightfully so. Our audiences are now more comfortable with an experience that feels native to the technology, so we should respond in kind.\nYet not all of the changes have benefitted users. Our efforts to simplify brought with them a trend of ultra-minimalism where aesthetics have taken priority over legibility, accessibility and discoverability. The trend shows no sign of losing popularity \u2014 and it is harming our experience of digital content.\n\nA thin veneer\nWe are in a race to create the most subdued, understated interface. Visual contrast is out. In its place: the thinnest weights of a typeface and white text on bright color backgrounds. Headlines, text, borders, backgrounds, icons, form controls and inputs: all grey.\nWhile we can look back over the last decade and see minimalist trends emerging on the web, I think we can place a fair share of the responsibility for the recent shift in priorities on Apple. The release of iOS 7 ushered in a radical change to its user interface. It paired mobile interaction design to the simplicity and eloquence of Apple\u2019s marketing and product design. It was a catalyst. We took what we saw, copied and consumed the aesthetics like pick-and-mix.\nNew technology compounds this trend. Computer monitors and mobile devices are available with screens of unprecedented resolutions. Ultra-light type and subtle hues, difficult to view on older screens, are more legible on these devices. It would be disingenuous to say that designers have always worked on machines representative of their audience\u2019s circumstances, but the gap has never been as large as it is now. We are running the risk of designing VIP lounges where the cost of entry is a Mac with a Retina display.\nMinimalist expectations\nLike progressive enhancement in an age of JavaScript, many good and sensible accessibility practices are being overlooked or ignored. We\u2019re driving unilateral design decisions that threaten accessibility. We\u2019ve approached every problem with the same solution, grasping on to the integrity of beauty, focusing on expression over users\u2019 needs and content. \nSomeone once suggested to me that a client\u2019s website should include two states. 
The first state would be the ideal experience, with low color contrast, light font weights and no differentiation between links and text. It would be the default. The second state would be presented in whatever way was necessary to meet accessibility standards. Users would have to opt out of the default state via a toggle if it wasn\u2019t meeting their needs. A sort of first-class, upper deck cabin equivalent of graceful degradation. That this would divide the user base was irrelevant, as the aesthetics of the brand were absolute. \nIt may seem like an unusual anecdote, but it isn\u2019t uncommon to see this thinking in our industry. Again and again, we place the burden of responsibility to participate in a usable experience on others. We view accessibility and good design as mutually exclusive. Taking for granted what users will tolerate is usually the forte of monopolistic services, but increasingly we apply the same arrogance to our new products and services.\n\nImitation without representation\nAll of us are influenced in one way or another by one another\u2019s work. We are consciously and unconsciously affected by the visual and audible activity around us. This is important and unavoidable. We do not produce work in a vacuum. We respond to technology and culture. We channel language and geography. We absorb the sights and sounds of film, television, news. To mimic and copy is part and parcel of creating something an audience of many can comprehend and respond to. Our clients often look first to their competitors\u2019 products to understand their success.\nHowever, problems arise when we focus on style without context; form without function; mimicry as method. Copied and reused without any of the ethos of the original, stripped of deliberate and informed decision-making, the so-called look and feel becomes nothing more than paint on an empty facade.\nThe typographic and color choices so in vogue today with our popular digital products and services have little in common with the brands they are meant to represent.\n\nFor want of good design, the message was lost\nThe question to ask is: does the interface truly reflect the product? Is it an accurate characterization of the brand and organizational values? Does the delivery of the content match the tone of voice?\nThe answer is: probably not. Because every organization, every app or service, is unique. Each with its own personality, its own values and wonderful quirks. Design is communication. We should do everything in our role as professionals to use design to give voice to the message. Our job is to clearly communicate the benefits of a service and unreservedly allow access to information and content. To do otherwise, by obscuring with fashionable styles and elusive information architecture, does a great disservice to the people who chose to engage with and trust our products.\nWe can achieve hierarchy and visual rhythm without resorting to extreme reduction. We can craft a beautiful experience with fine detail and curiosity while meeting fundamental standards of accessibility (and strive to meet many more).\nStandards of excellence\nIt isn\u2019t always comfortable to step back and objectively question our design choices. We get lost in the flow of our work, using patterns and preferences we\u2019ve tried and tested before. 
That our decisions often seem like second nature is a gift of experience, but sometimes it prevents us from finding our blind spots.\nI was first caught out by my own biases a few years ago, when designing an interface for the Bank of England. After deciding on the colors for the typography and interactive elements, I learned that the site had to meet AAA accessibility standards. My choices quickly fell apart. It was eye-opening. I had to start again with restrictions and use size, weight and placement instead to construct the visual hierarchy.\nEven now, I make mistakes. On a recent project, I used large photographs on an organization\u2019s website to promote their products. Knowing that our team had control over the art direction, I felt confident that we could compose the photographs to work with text overlays. Despite our best effort, the cropped images weren\u2019t always consistent, undermining the text\u2019s legibility. If I had the chance to do it again, I would separate the text and image.\nSo, what practical things can we consider to give our users the experience they deserve?\nPut guidelines in place\n\nThink about your brand values. Write down keywords and use them as a framework when choosing a typeface. Explore colors that convey the organization\u2019s personality and emotional appeal.\nDefine a color palette that is web-ready and meets minimum accessibility standards. Note which colors are suitable for use with text. Only very dark hues of grey are consistently legible so keep them for non-essential text (for example, as placeholders in form inputs).\nFind which background colors you can safely use with white text, and consider integrating contrast checks into your workflow.\nUse roman and medium weights for body copy. Reserve lighter weights of a typeface for very large text. Thin fonts are usually the first to break down because of aliasing differences across platforms and screens.\nCheck that the size, leading and length of your type is always legible and readable. Define lower and upper limits. Small text is best left for captions and words in uppercase.\nAvoid overlaying text on images unless it\u2019s guaranteed to be legible. If it\u2019s necessary to optimize space in the layout, give the text a container. Scrims aren\u2019t always reliable: the text will inevitably overlap a part of the photograph without a contrasting ground.\n\nTest your work\n\nReview legibility and contrast on different devices. It\u2019s just as important as testing the layout of a responsive website. If you have a local device lab, pay it a visit.\nFind a computer monitor near a window when the sun is shining. Step outside the studio and try to read your content on a mobile device with different brightness levels. \nAsk your friends and family what they use at home and at work. It\u2019s one way of making sure your feedback isn\u2019t always coming from a closed loop.\n\nPush your limits\n\nYou define what the user sees. If you\u2019ve inherited brand guidelines, question them. If you don\u2019t agree with the choices, make the case for why they should change.\nExperiment with size, weight and color to find contrast. Objects with low contrast appear similar to one another and undermine the visual hierarchy. Weak relationships between figure and ground diminish visual interest. A balanced level of contrast removes ambiguity and creates focal points. It captures and holds our attention.\nIf you\u2019re lost for inspiration, look to graphic design in print. 
We have a wealth of history, full of examples that excel in using contrast to establish visual hierarchy.\nEmbrace limitations. Use boundaries as an opportunity to explore possibilities.\n\nMore than just a facade\nDesigning with standards encourages legibility and helps to define a strong visual hierarchy. Design without exclusion (through neither negligence or intent) gets around discussions of demographics, speaks to a larger audience and makes good business sense. Following the latest trends not only weakens usability but also hinders a cohesive and distinctive brand.\nUsers will make means when they need to, by increasing browser font sizes or enabling system features for accessibility. But we can do our part to take as much of that burden off of the user and ask less of those who need it most.\nIn architecture, it isn\u2019t buildings that mimic what is fashionable that stand the test of time. Nor do we admire buildings that tack on separate, poorly constructed extensions to meet a bare minimum of safety regulations. We admire architecture that offers well-considered, remarkable, usable spaces with universal access.\nPerhaps we can take inspiration from these spaces. Let\u2019s give our buildings a bold voice and make sure the doors are open to everyone.", "year": "2015", "author": "Mark Mitchell", "author_slug": "markmitchell", "published": "2015-12-13T00:00:00+00:00", "url": "https://24ways.org/2015/designing-with-contrast/", "topic": "design"} {"rowid": 67, "title": "What I Learned about Product Design This Year", "contents": "2015 was a humbling year for me. In September of 2014, I joined a tiny but established startup called SproutVideo as their third employee and first designer. The role interests me because it affords the opportunity to see how design can grow a solid product with a loyal user-base into something even better. \nThe work I do now could also have a real impact on the brand and user experience of our product for years to come, which is a thrilling prospect in an industry where much of what I do feels small and temporary. I got in on the ground floor of something special: a small, dedicated, useful company that cares deeply about making video hosting effortless and rewarding for our users.\nI had (and still have) grand ideas for what thoughtful design can do for a product, and the smaller-scale product design work I\u2019ve done or helped manage over the past few years gave me enough eager confidence to dive in head first. Readers who have experience redesigning complex existing products probably have a knowing smirk on their face right now. As I said, it\u2019s been humbling. A year of focused product design, especially on the scale we are trying to achieve with our small team at SproutVideo, has taught me more than any projects in recent memory. I\u2019d like to share a few of those lessons.\nProduct design is very different from marketing design\nThe majority of my recent work leading up to SproutVideo has been in marketing design. These projects are so fun because their aim is to communicate the value of the product in a compelling and memorable way. In order to achieve this goal, I spent a lot of time thinking about content strategy, responsive design, and how to create striking visuals that tell a story. These are all pursuits I love.\nProduct design is a different beast. When designing a homepage, I can employ powerful imagery, wild gradients, and somewhat-quirky fonts. 
When I began redesigning the SproutVideo product, I wanted to draw on all the beautiful assets I\u2019ve created for our marketing materials, but big gradients, textures, and display fonts made no sense in this new context.\nThat\u2019s because the product isn\u2019t about us, and it isn\u2019t about telling our story. Product design is about getting out of the way so people can do their job. The visual design is there to create a pleasant atmosphere for people to work in, and to help support the user experience. Learning to take \u201cus\u201d out of the equation took some work after years of creating gorgeous imagery and content for the sales-driven side of businesses.\nI\u2019ve learned it\u2019s very valuable to design both sides of the experience, because marketing and product design flex different muscles. If you\u2019re currently in an environment where the two are separate, consider switching teams in 2016. Designing for product when you\u2019ve mostly done marketing, or vice versa, will deepen your knowledge as a designer overall. You\u2019ll face new unexpected challenges, which is the only way to grow.\nProduct design can not start with what looks good on Dribbble\nI have an embarrassing confession: when I began the redesign, I had a secret goal of making something that would look gorgeous in my portfolio. I have a collection of product shots that I admire on Dribbble; examples of beautiful dashboards and widgets and UI elements that look good enough to frame. I wanted people to feel the same way about the final outcome of our redesign. Mistakenly, this was a factor in my initial work. I opened Photoshop and crafted pixel-perfect static buttons and form elements and color palettes that\u200a\u2014\u200awhen applied to our actual product\u200a\u2014\u200alooked like a toddler beauty pageant. It added up to a lot of unusable shininess, noise, and silliness.\nI was disappointed; these elements seemed so lovely in isolation, but in context, they felt tacky and overblown. I realized: I\u2019m not here to design the world\u2019s most beautiful drop down menu. Good design has nothing to do with ego, but in my experience designers are, at least a little bit, secret divas. I\u2019m no exception. I had to remind myself that I am not working in service of a bigger Dribbble following or to create the most Pinterest-ing work. My function is solely to serve the users\u200a\u2014\u200ato make life a little better for the good people who keep my company in business.\nThis meant letting go of pixel-level beauty to create something bigger and harder: a system of elements that work together in harmony in many contexts. The visual style exists to guide the users. When done well, it becomes a language that users understand, so when they encounter a new feature or have a new goal, they already feel comfortable navigating it. This meant stripping back my gorgeous animated menu into something that didn\u2019t detract from important neighboring content, and could easily fit in other parts of the app. In order to know what visual style would support the users, I had to take a wider view of the product as a whole.\nJust accept that designing a great product \u2013 like many worthwhile pursuits \u2013 is initially laborious and messy\nOnce I realized I couldn\u2019t start by creating the most Dribbble-worthy thing, I knew I\u2019d have to begin with the unglamorous, frustrating, but weirdly wonderful work of mapping out how the product\u2019s content could better be structured. 
Since we\u2019re redesigning an existing product, I assumed this would be fairly straightforward: the functionality was already in place, and my job was just to structure it in a more easily navigable way.\nI started by handing off a few wireframes of the key screens to the developer, and that\u2019s when the questions began rolling in: \u201cIf we move this content into a modal, how will it affect this similar action here?\u201d \u201cWhat happens if they don\u2019t add video tags, but they do add a description?\u201d \u201cWhat if the user has a title that is 500 characters long?\u201d \u201cWhat if they want their video to be private to some users, but accessible to others?\u201d.\nHow annoying (but really, fantastic) that people use our product in so many ways. Turns out, product design isn\u2019t about laying out elements in the most ideal scenario for the user that\u2019s most convenient for you. As product designers, we have to foresee every outcome, and anticipate every potential user need.\nWhich brings me to another annoying epiphany: if you want to do it well, and account for every user, product design is so much more snarly and tangled than you\u2019d expect going in. I began with a simple goal: to improve the experience on just one of our key product pages. However, every small change impacts every part of the product to some degree, and that impact has to be accounted for. Every decision is based on assumptions that have to be tested; I test my assumptions by observing users, talking to the team, wireframing, and prototyping. Many of my assumptions are wrong. There are days when it\u2019s incredibly frustrating, because an elegant solution for users with one goal will complicate life for users with another goal. It\u2019s vital to solve as many scenarios as possible, even though this is slow, sometimes mind-bending work.\nAs a side bonus, wireframing and prototyping every potential state in a product is tedious, but your developers will thank you for it. It\u2019s not their job to solve what happens when there\u2019s an empty state, error, or edge case. Showing you\u2019ve accounted for these scenarios will win a developer\u2019s respect; failing to do so will frustrate them.\nWhen you\u2019ve created and tested a system that supports user needs, it will be beautiful\nRemember what I said in the beginning about wanting to create a Dribbble-worthy product? When I stopped focusing on the visual details of the design (color, spacing, light and shadow, font choices) and focused instead on structuring the content to maximize usability and delight, a beautiful design began to emerge naturally.\nI began with grayscale, flat wireframes as a strategy to keep me from getting pulled into the visual style before the user experience was established. As I created a system of elements that worked in harmony, the visual style choices became obvious. Some buttons would need to be brighter and sit off the page to help the user spot important actions. Some elements would need line separators to create a hierarchy, where others could stand on their own as an emphasized piece of content. As the user experience took shape, the visual style emerged naturally to support it. The result is a product that feels beautiful to use, because I was thoughtful about the experience first.\n\nA big takeaway from this process has been that my assumptions will often be proven wrong. 
My assumptions about how to design a great product, and how users will interact with that product, have been tested and revised repeatedly. At SproutVideo we\u2019re about to undertake the biggest test of our work; we\u2019re going to launch a small part of the product redesign to our users. If I\u2019ve learned anything, it\u2019s that I will continue to be humbled by the ongoing effort of making the best product I can, which is a wonderful thing.\nNext year, I hope you all get to do work that takes you out of our comfort zone. Be regularly confounded and embarrassed by your wrong assumptions, learn from them, and come back and tell us what you learned in 2016.", "year": "2015", "author": "Meagan Fisher", "author_slug": "meaganfisher", "published": "2015-12-14T00:00:00+00:00", "url": "https://24ways.org/2015/what-i-learned-about-product-design-this-year/", "topic": "design"} {"rowid": 58, "title": "Beyond the Style Guide", "contents": "Much like baking a Christmas cake, designing for the web involves creating an experience in layers. Starting with a solid base that provides the core experience (the fruit cake), we can add further layers, each adding refinement (the marzipan) and delight (the icing).\nDon\u2019t worry, this isn\u2019t a misplaced cake recipe, but an evaluation of modular design and the role style guides can play in acknowledging these different concerns, be they presentational or programmatic.\nThe auteur\u2019s style guide\nAlthough trained as a graphic designer, it was only when I encountered the immediacy of the web that I felt truly empowered as a designer. Given a desire to control every aspect of the resulting experience, I slowly adopted the role of an auteur, exploring every part of the web stack: front-end to back-end, and everything in between. A few years ago, I dreaded using the command line. Today, the terminal is a permanent feature in my Dock.\nIn straddling the realms of graphic design and programming, it\u2019s the point at which they meet that I find most fascinating, with each dicipline valuing the creation of effective systems, be they for communication or code efficiency. Front-end style guides live at this intersection, demonstrating both the modularity of code and the application of visual design.\nPainting by numbers\nIn our rush to build modular systems, design frameworks have grown in popularity. While enabling quick assembly, these come at the cost of originality and creative expression \u2013 perhaps one reason why we\u2019re seeing the homogenisation of web design.\nIn editorial design, layouts should accentuate content and present it in an engaging manner. Yet on the web we see a practice that seeks templated predictability. In \u2018Design Machines\u2019 Travis Gertz argued that (emphasis added):\n\nDesign systems still feel like a novelty in screen-based design. We nerd out over grid systems and modular scales and obsess over style guides and pattern libraries. We\u2019re pretty good at using them to build repeatable components and site-wide standards, but that\u2019s sort of where it ends. 
[\u2026] But to stop there is to ignore the true purpose and potential of a design system.\n\nUnless we consider how interface patterns fully embrace the design systems they should be built upon, style guides may exacerbate this paint-by-numbers approach, encouraging conformance and suppressing creativity.\nAnatomy of a button\nLet\u2019s take a look at that most canonical of components, the button, and consider what we might wish to document and demonstrate in a style guide.\nThe different layers of our button component.\nContent\nThe most variable aspect of any component. Content guidelines will exert the most influence here, dictating things like tone of voice (whether we should use stiff, formal language like \u2018Submit form\u2019, or adopt a more friendly tone, perhaps \u2018Send us your message\u2019) and appropriate language. For an internationalised interface, this may also impact word length and text direction or orientation.\nStructure\nHTML provides a limited vocabulary which we can use to structure content and add meaning. For interactive elements, the choice of element can also affect its behaviour, such as whether a button submits form data or links to another page:\n\nButton text\nNote: One of the reasons I prefer to use