{"rowid": 204, "title": "Cascading Web Design with Feature Queries", "contents": "Feature queries, also known as the @supports rule, were introduced as an extension to the CSS2 as part of the CSS Conditional Rules Module Level 3, which was first published as a working draft in 2011. It is a conditional group rule that tests if the browser\u2019s user agent supports CSS property:value pairs, and arbitrary conjunctions (and), disjunctions (or), and negations (not) of them.\nThe motivation behind this feature was to allow authors to write styles using new features when they were supported but degrade gracefully in browsers where they are not. Even though the nature of CSS already allows for graceful degradation, for example, by ignoring unsupported properties or values without disrupting other styles in the stylesheet, sometimes we need a bit more than that.\nCSS is ultimately a holistic technology, in that, even though you can use properties in isolation, the full power of CSS shines through when used in combination. This is especially evident when it comes to building web layouts. Having native feature detection in CSS makes it much more convenient to build with cutting-edge CSS for the latest browsers while supporting older browsers at the same time.\nBrowser support\nOpera first implemented feature queries in November 2012, both Chrome and Firefox had it since May 2013. There have been several articles about feature queries written over the years, however, it seems that awareness of its broad support isn\u2019t that well-known. Much of the earlier coverage on feature queries was not written in English, and perhaps that was a limiting factor.\n\n@supports \u2015 CSS\u306eFeature Queries by Masataka Yakura, August 8 2012\nNative CSS Feature Detection via the @supports Rule by Chris Mills, December 21 2012\nCSS @supports by David Walsh, April 3 2013\nResponsive typography with CSS Feature Queries by Aral Balkan, April 9 2013\nHow to use the @supports rule in your CSS by Lea Verou, January 31 2014\nCSS Feature Queries by Amit Tal, June 2 2014\nComing Soon: CSS Feature Queries by Adobe Web Platform Team, August 21 2014\nCSS feature queries mittels @supports by Daniel Erlinger, November 27 2014\n\nAs of December 2017, all current major browsers and their previous 2 versions support feature queries. Feature queries are also supported on Opera Mini, UC Browser and Samsung Internet. The only browsers that do not support feature queries are Internet Explorer and Blackberry Mobile, but that may be less of an issue than you might think.\n\n Can I Use css-featurequeries? Data on support for the css-featurequeries feature across the major browsers from caniuse.com.\n\nGranted, there is still a significant number of organisations that require support of Internet Explorer. Microsoft still continues to support IE11 for the life-cycle of Windows 7, 8 and 10. They have, however, stopped supporting older versions since January 12, 2016. It is inevitable that there will be organisations that, for some reason or another, make it mandatory to support IE, but as time goes on, this number will continue to shrink.\nJen Simmons wrote an extensive article called Using Feature Queries in CSS which discussed a matrix of potential situations when it comes to the usage of feature queries. 
The following image is a summary of the aforementioned matrix.\n\nThe most tricky situation we have to deal with is the box in the top-left corner, which are \u201cbrowsers that don\u2019t support feature queries, yet do support the feature in question\u201d. For cases like those, it really depends on the specific CSS feature you want to use and a subsequent evaluation of the pros and cons of not including that feature in spite of the fact the browser (most likely Internet Explorer) supports it.\nThe basics of feature queries\nAs with any conditional, feature queries operate on boolean logic, in other words, if the query resolves to true, apply the CSS properties within the block, or else just ignore the entire block altogether. The syntax of a simple feature query is as follows:\n.selector {\n /* Styles that are supported in old browsers */\n}\n\n@supports (property:value) {\n .selector {\n /* Styles for browsers that support the specified property */\n }\n}\nNote that the parentheses around the property:value pair are mandatory and the rule is invalid without them. Styles that apply to older browsers, i.e. fallback styles, should come first, followed by the newer properties, which are contained within the @supports conditional. Because of the cascade, fallback styles will be overridden by the newer properties in the modern browsers that support them.\nmain {\n background-color: red;\n}\n\n@supports (display:grid) {\n main {\n background-color: green;\n }\n}\nIn this example, browsers that support CSS grid will have a main element with a green background colour because the conditional resolves to true, while browsers that do not support grid will have a main element with a red background colour.\nThe implication of such behaviour means that we can layer on enhanced styles based on the features we want to use and these styles will show up in browsers that support them. But for those that do not, they will get a more basic look that still works anyway. And that will be our approach moving forward.\nBoolean operators for feature queries\nThe and operator allows us to test for support of multiple properties within a single conditional. This would be useful for cases where the desired output requires multiple cutting-edge features to be supported at the same time to work. All the property:value pairs listed in the conditional must resolve to true for the styles within the rule to be applied.\n@supports (transform: rotate(45deg)) and\n (writing-mode: vertical-rl) {\n /* Some CSS styles */\n}\nThe or operator allows us to list multiple property:value pairs in the conditional and as long as one of them resolves to true, the styles within the block will be applied. A relevant use-case would be for properties with vendor-prefixes.\n@supports (background: -webkit-gradient(linear, left top, left bottom, from(white), to(black))) or\n (background: -o-linear-gradient(top, white, black)) or\n (background: linear-gradient(to bottom, white, black)) {\n /* Some CSS styles */\n}\nThe not operator negates the resolution of the property:value pair in the conditional, resolving to false when the property is supported and vice versa. This is useful when there are two distinct sets of styles to be applied depending on the support of a specific feature. 
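For instance, a rough sketch of that two-branch pattern might look like the following (the .gallery selectors and the float-based fallback are purely hypothetical, not a prescribed technique):\n@supports not (display: grid) {\n .gallery-item {\n /* Fallback layout for browsers that do not support grid */\n float: left;\n width: 33.333%;\n }\n}\n\n@supports (display: grid) {\n .gallery {\n /* Enhanced layout for browsers that do support grid */\n display: grid;\n grid-template-columns: repeat(3, 1fr);\n }\n}\n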
However, we do need to keep in mind the case where a browser does not support feature queries, and handle the styles for those browsers accordingly.\n@supports not (shape-outside: polygon(100% 80%,20% 0,100% 0)) {\n /* Some CSS styles */\n}\nTo avoid confusion between and and or, these operators must be explicitly declared as opposed to using commas or spaces. To prevent confusion caused by precedence rules, and, or and not operators cannot be mixed without a layer of parentheses.\nThis rule is not valid and the styles within the block will be ignored.\n@supports (transition-property: background-color) or\n (animation-name: fade) and\n (transform: scale(1.5)) {\n /* Some CSS styles */\n}\nTo make it work, parentheses must be added either around the two properties adjacent to the or or the and operator like so:\n@supports ((transition-property: background-color) or\n (animation-name: fade)) and\n (transform: scale(1.5)) {\n /* Some CSS styles */\n}\n@supports (transition-property: background-color) or\n ((animation-name: fade) and\n (transform: scale(1.5))) {\n /* Some CSS styles */\n}\nThe current specification states that whitespace is required after a not and on both sides of an and or or, but this may change in a future version of the specification. It is acceptable to add extra parentheses even when they are not needed, but omission of parentheses is considered invalid.\nCascading web design\nI\u2019d like to introduce the concept of cascading web design, an approach made possible with feature queries. Browser update cycles are much shorter these days, so new features and bug fixes are being pushed out a lot more frequently as compared to the early days of the web.\nWith the maturation of web standards, browser behaviour is less unpredictable than before, but each browser will still have their respective quirks. Chances are, the latest features will not ship across all browsers at the same time. But you know what? That\u2019s perfectly fine. If we accept this as a feature of the web, instead of a bug, we\u2019ve just opened up a lot more web design possibilities.\nThe following example is a basic, responsive grid layout of items laid out with flexbox, as viewed on IE11.\n\nWe can add a block of styles within an @supports rule to apply CSS grid properties for browsers that support them to enhance this layout, like so:\n\nThe web is not a static medium. It is dynamic and interactive and we manipulate this medium by writing code to tell the browser what we want it to do. Rather than micromanaging the pixels in our designs, maybe it\u2019s time we cede control of our designs to the browsers that render them. This means being okay with your designs looking different across browsers and devices.\nAs mentioned earlier, CSS works best when various properties are combined. It\u2019s one of those things whose whole is greater than the sum of its parts. So feature queries, when combined with media queries, allow us to design layouts that are most effective in the environment they have to perform in.\nSuch an approach requires interpolative thinking, on multiple levels. 
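To make that a little more concrete, here is a minimal sketch (with made-up class names) of a flexbox listing that upgrades itself to a grid only when the browser supports grid and the viewport is wide enough:\n.items {\n display: flex;\n flex-wrap: wrap;\n}\n\n@supports (display: grid) {\n @media (min-width: 40em) {\n .items {\n /* Both conditions hold, so the flexbox fallback is overridden */\n display: grid;\n grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));\n }\n }\n}\nNesting the media query inside the feature query keeps the two conditions in one place, although the same result can be achieved with the rules nested the other way around.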
As web designers and developers, we don\u2019t just think in one fixed dimension, we get to think about how our design will morph on a narrow screen, or on an older browser, in addition to how it will appear on a browser with the latest features.\nIn the following example, the layout on the left is what IE11 users will see, the one in the middle is what Firefox users will see, because Firefox doesn\u2019t support CSS shapes yet, but once it does, it will then look like the layout on the right, which is what Chrome users see now.\n\nWith the release of CSS Grid this year, we\u2019ve hit another milestone in the evolution of the web as a medium. The beauty of the web is its backwards compatibility and generous fault tolerance. Browser features are largely additive, holding onto the good parts and building on top of them, while deprecating the bits that didn\u2019t work well.\nFeature queries allow us to progressively enhance our CSS, establishing a basic level of user experience across the widest range of browsers, while building in more advanced functionality for browsers who can use them. And hopefully, this will allow more of us to create designs that truly embrace the nature of the web.", "year": "2017", "author": "Chen Hui Jing", "author_slug": "chenhuijing", "published": "2017-12-01T00:00:00+00:00", "url": "https://24ways.org/2017/cascading-web-design/", "topic": "code"} {"rowid": 201, "title": "Lint the Web Forward With Sonarwhal", "contents": "Years ago, when I was in a senior in college, much of my web development courses focused on two things: the basics like HTML and CSS (and boy, do I mean basic), and Adobe Flash. I spent many nights writing ActionScript 3.0 to build interactions for the websites that I would add to my portfolio. A few months after graduating, I built one website in Flash for a client, then never again. Flash was dying, and it became obsolete in my r\u00e9sum\u00e9 and portfolio. \nThat was my first lesson in the speed at which things change in technology, and what a daunting realization that was as a new graduate looking to enter the professional world. Now, seven years later, I work on the Microsoft Edge team where I help design and build a tool that would have lessened my early career anxieties: sonarwhal. \nSonarwhal is a linting tool, built by and for the web community. The code is open source and lives under the JS Foundation. It helps web developers and designers like me keep up with the constant change in technology while simultaneously teaching how to code better websites. \nIntroducing sonarwhal\u2019s mascot Nellie\nGood web development is hard. It is more than HTML, CSS, and JavaScript: developers are expected to have a grasp of accessibility, performance, security, emerging standards, and more, all while refreshing this knowledge every few months as the web evolves. It\u2019s a lot to keep track of.\n\u00a0\nWeb development is hard \nStaying up-to-date on all this knowledge is one of the driving forces for developing this scanning tool. Whether you are just starting out, are a student, or you have over a decade of experience, the sonarwhal team wants to help you build better websites for all browsers. \nCurrently sonarwhal checks for best practices in five categories: Accessibility, Interoperability, Performance, PWAs, and Security. Each check is called a \u201crule\u201d. You can configure them and even create your own rules if you need to follow some specific guidelines for your project (e.g. 
validate analytics attributes, title format of pages, etc.). \nYou can use sonarwhal in two ways:\n\nAn online version, that provides a quick and easy way to scan any public website.\nA command line tool, if you want more control over the configuration, or want to integrate it into your development flow.\n\nThe Online Scanner\nThe online version offers a streamlined way to scan a website; just enter a URL and you will get a web page of scan results with a permalink that you can share and revisit at any time.\nThe online version of sonarwal\nWhen my team works on a new rule, we spend the bulk of our time carefully researching each subject, finding sources, and documenting it rather than writing the rule\u2019s code. Not only is it important that we get you the right results, but we also want you to understand why something is failing. Next to each failing rule you\u2019ll find a link to its detailed documentation, explaining why you should care about it, what exactly we are testing, examples that pass and examples that don\u2019t, and useful links to even more in-depth documentation if you are interested in the subject.\nWe hope that between reading the documentation and continued use of sonarwhal, developers can stay on top of best practices. As devs continue to build sites and identify recurring issues that appear in their results, they will hopefully start to automatically include those missing elements or fix those pieces of code that are producing errors. This also isn\u2019t a one-way communication: the documentation is not only available on the sonarwhal site, but also on GitHub for editing so you can help us make it even better!\nA results report\nThe current configuration for the online scanner is very strict, so it might hurt your feelings (it did when I first tested it on my personal website). But you can configure sonarwhal to any level of strictness as well as customize the command line tool to your needs! \nSonarwhal\u2019s CLI\u00a0\nThe CLI gives you full control of sonarwhal: what rules to use, tweaks to them, domains that are out of your control, and so on. You will need the latest node LTS (v8) or Stable (v9) and your favorite package manager, such as npm:\nnpm install -g sonarwhal\nYou can now run sonarwhal from anywhere via:\nsonarwhal https://example.com\nUsing the CLI\nThe configuration is done via a .sonarwhalrc file. When analyzing a site, if no file is available, you will be prompted to answer a series of questions:\n\nWhat connector do you want to use? Connectors are what sonarwhal uses to access a website and gather all the information about the requests, resources, HTML, etc. Currently it supports jsdom, Microsoft Edge, and Google Chrome.\nWhat formatter? This is how you want to see the results: summary, stylish, etc. Make sure to look at the full list. Some are concise for, perfect for a quick build assessment, while others are more verbose and informative.\nDo you want to use the recommended rules configuration? Rules are the things we are validating. Unless you\u2019ve read the documentation and know what you are doing, first timers should probably use the recommended configuration.\nWhat browsers are you targeting? One of the best features of sonarwhal is that rules can adapt their feedback depending on your targeted browsers, suggesting to add or remove things. For example, the rule \u201cHighest Document Mode\u201d will tell you to add the \u201cX-UA-Compatible\u201d header if IE10 or lower is targeted or remove if the opposite is true. 
\n\nsonarwhal configuration generator questions\nOnce you answer all these questions the scan will start and you will have a .sonarwhalrc file similar to the following:\n{\n \"connector\": {\n \"name\": \"jsdom\",\n \"options\": {\n \"waitFor\": 1000\n }\n },\n \"formatters\": \"stylish\",\n \"rulesTimeout\": 120000,\n \"rules\": {\n \"apple-touch-icons\": \"error\",\n \"axe\": \"error\",\n \"content-type\": \"error\",\n \"disown-opener\": \"error\",\n \"highest-available-document-mode\": \"error\",\n \"validate-set-cookie-header\": \"warning\",\n // ...\n }\n}\nYou should see the scan initiate in the command line and within a few seconds the results should start to appear. Remember, the scan results will look different depending on which formatter you selected so try each one out to see which one you like best. \nsonarwhal results on my website and hurting my feelings \ud83d\udc94\nNow that you have a list of errors, you can get to work improving the site! Note though, that when you scan your website, it scans all the resources on that page and if you\u2019ve added something like analytics or fonts hosted elsewhere, you are unable to change those files. You can configure the CLI to ignore files from certain domains so that you are only getting results for files you are in control of.\nThe documentation should give enough guidance on how to fix the errors, but if it\u2019s insufficient, please help us and suggest edits or contribute back to it. This is a community effort and chances are someone else will have the same question as you.\nWhen I scanned both my websites, sonarwhal alerted me to not having an Apple Touch Icon. If I search on the web as opposed to using the sonarwhal documentation, the first top 3 results give me outdated information: I need to include many different icon sizes. I don\u2019t need to include all the different size icons that target different devices. Declaring one icon sized 180px x 180px will provide a large enough icon for devices and it will scale down as appropriate for people on older devices. \nThe information at the top of the search results isn\u2019t always the correct answer to an issue and we don\u2019t want you to have to search through outdated documentation. As sonarwhal\u2019s capabilities expand, the goal is for it to be the one stop shop for helping preflight your website. \nThe journey up until now and looking forward\n\nOn the Microsoft Edge team, we\u2019re passionate about empowering developers to build great websites. Every day we see so many sites come through our issue tracker. (Thanks for filing those bugs, they help us make Microsoft Edge better and better!) Some issues we see over and over are honest mistakes or outdated \u2018best practices\u2019 that could be avoided, so we built this tool to help everyone help make the web a better place.\nWhen we decided to create sonarwhal, we wanted to create a tool that would help developers write better and more up-to-date code for their websites. We want sonarwhal to be useful to anyone so, early on, we defined three guiding principles we\u2019ve used along the way:\n\nCommunity Driven. We build for the community\u2019s best interests. The web belongs to everyone and this project should too. Not only is it open source, we\u2019ve also donated it to the JS Foundation and have an inclusive governance model that welcomes the collaboration of anyone, individual or company.\nUser Centric. 
We want to put the user at the center, making sonarwhal configurable for your needs and easy to use no matter what your skill level is.\nCollaborative. We didn\u2019t want to reinvent the wheel, so we collaborated with existing tools and services that help developers build for the web. Some examples are aXe, snyk.io, Cloudinary, etc.\n\nThis is just the beginning and we still have lots to do. We\u2019re hard at work on a backlog of exciting features for future releases, such as:\n\nNew rules for a variety of areas like\u00a0performance,\u00a0accessibility,\u00a0security,\u00a0progressive web apps, and more.\nA plug-in for Visual Studio Code: we want sonarwhal to help you write better websites, and what better moment than when you are in your editor.\nConfiguration options for the online service: as we fine tune the infrastructure, the rule configuration for our scanner is locked, but we look forward to adding CLI customization options here in the near future.\n\nThis is a tool for the web community by the web community so if you are excited about sonarwhal, making a better web, and want to contribute, we have a\u00a0few issues where you might be able to help. Also, don\u2019t forget to check the rest of the\u00a0sonarwhal GitHub organization. PRs are always welcome and appreciated! \nLet us know what you think about the scanner at @NarwhalNellie on Twitter and we hope you\u2019ll help us lint the web forward!", "year": "2017", "author": "Stephanie Drescher", "author_slug": "stephaniedrescher", "published": "2017-12-02T00:00:00+00:00", "url": "https://24ways.org/2017/lint-the-web-forward-with-sonarwhal/", "topic": "code"} {"rowid": 193, "title": "Web Content Accessibility Guidelines\u2014for People Who Haven't Read Them", "contents": "I\u2019ve been a huge fan of the Web Content Accessibility Guidelines 2.0 since the World Wide Web Consortium (W3C) published them, nine years ago. I\u2019ve found them practical and future-proof, and I\u2019ve found that they can save a huge amount of time for designers and developers. You can apply them to anything that you can open in a browser. My favourite part is when I use the guidelines to make a website accessible, and then attend user-testing and see someone with a disability easily using that website.\nToday, the United Nations International Day of Persons with Disabilities, seems like a good time to re-read Laura Kalbag\u2019s explanation of why we should bother with accessibility. That should motivate you to devour this article.\nIf you haven\u2019t read the Web Content Accessibility Guidelines 2.0, you might find them a bit off-putting at first. The editors needed to create a single standard that countries around the world could refer to in legislation, and so some of the language in the guidelines reads like legalese. The editors also needed to future-proof the guidelines, and so some terminology\u2014such as \u201ctime-based media\u201d and \u201cprogrammatically determined\u201d\u2014can sound ambiguous. 
The guidelines can seem lengthy, too: printing the guidelines, the Understanding WCAG 2.0 document, and the Techniques for WCAG 2.0 document would take 1,200 printed pages.\nThis festive season, let\u2019s rip off that legalese and ambiguous terminology like wrapping paper, and see\u2014in a single article\u2014what gifts the Web Content Accessibility Guidelines 2.0 editors have bestowed upon us.\nCan your users perceive the information on your website?\nThe first guideline has criteria that help you prevent your users from asking \u201cWhat the **** is this thing here supposed to be?\u201d\n1.1.1 Text is the most accessible format for information. Screen readers\u2014such as the \u201cVoiceOver\u201d setting on your iPhone or the \u201cTalkBack\u201d app on your Android phone\u2014understand text better than any other format. The same applies for other assistive technology, such as translation apps and Braille displays. So, if you have anything on your webpage that\u2019s not text, you must add some text that gives your user the same information. You probably know how to do this already; for example:\n\nfor images in webpages, put some alternative text in an alt attribute to tell your user what the image conveys to the user;\nfor photos in tweets, add a description to make the images accessible;\nfor Instagram posts, write a caption that conveys the photo\u2019s information.\n\nThe alternative text should allow the user to get the same information as someone who can see the image. For websites that have too many images for someone to add alternative text to, consider how machine learning and Dynamically Generated Alt Text might\u2014might\u2014be appropriate.\nYou can probably think of a few exceptions where providing text to describe an image might not make sense. Remember I described these guidelines as \u201cpractical\u201d? They cover all those exceptions:\n\nUser interface controls such as buttons and text inputs must have names or labels to tell your user what they do.\nIf your webpage has video or audio (more about these later on!), you must\u2014at least\u2014have text to tell the user what they are.\nMaybe your webpage has a test where your user has to answer a question about an image or some audio, and alternative text would give away the answer. In that case, just describe the test in text so your users know what it is.\nIf your webpage features a work of art, tell your user the experience it evokes.\nIf you have to include a Captcha on your webpage\u2014and please avoid Captchas if at all possible, because some users cannot get past them\u2014you must include text to tell your user what it is, and make sure that it doesn\u2019t rely on only one sense, such as vision.\nIf you\u2019ve included something just as decoration, you must make sure that your user\u2019s assistive technology can ignore it. Again, you probably know how to do this. For example, you could use CSS instead of HTML to include decorative images, or you could add an empty alt attribute to the img element. (Please avoid that recent trend where developers add empty alt attributes to all images in a webpage just to make the HTML validate. You\u2019re better than that.)\n\n(Notice that the guidelines allow you to choose how to conform to them, with whatever technology you choose. To make your website conform to a guideline, you can either choose one of the techniques for WCAG 2.0 for that guideline or come up with your own. 
Choosing a tried-and-tested technique usually saves time!)\n1.2.1 If your website includes a podcast episode, speech, lecture, or any other recorded audio without video, you must include a transcription or some other text to give your user the same information. In a lot of cases, you might find this easier than you expect: professional transcription services can prove relatively inexpensive and fast, and sometimes a speaker or lecturer can provide the speech or lecture notes that they read out word-for-word. Just make sure that all your users can get the same information and the same results, whether they can hear the audio or not. For example, David Smith and Marco Arment always publish episode transcripts for their Under the Radar podcast. \nSimilarly, if your website includes recorded video without audio\u2014such as an animation or a promotional video\u2014you must either use text to detail what happens in the video or include an audio version. Again, this might work out easier then you perhaps fear: for example, you could check to see whether the animation started life as a list of instructions, or whether the promotional video conveys the same information as the \u201cAbout Us\u201d webpage. You want to make sure that all your users can get the same information and the same results, whether they can see that video or not.\n1.2.2 If your website includes recorded videos with audio, you must add captions to those videos for users who can\u2019t hear the audio. Professional transcription services can provide you with time-stamped text in caption formats that YouTube supports, such as .srt and .sbv. You can upload those to YouTube, so captions appear on your videos there. YouTube can auto-generate captions, but the quality varies from impressively accurate to comically inaccurate. If you have a text version of what the people in the video said\u2014such as the speech that a politician read or the bedtime story that an actor read\u2014you can create a transcript file in .txt format, without timestamps. YouTube then creates captions for your video by synchronising that text to the audio in the video. If you host your own videos, you can ask a professional transcription service to give you .vtt files that you can add to a video element\u2019s track element\u2014or you can handcraft your own. (A quick aside: if your website has more videos than you can caption in a reasonable amount of time, prioritise the most popular videos, the most important videos, and the videos most relevant to people with disabilities. Then make sure your users know how to ask you to caption other videos as they encounter them.)\n1.2.3 If your website has recorded videos that have audio, you must add an \u201caudio description\u201d narration to the video to describe important visual details, or add text to the webpage to detail what happens in the video for users who cannot see the videos. (I like to add audio files from videos to my Huffduffer account so that I can listen to them while commuting.) Maybe your home page has a video where someone says, \u201cI\u2019d like to explain our new TPS reports\u201d while \u201cBill Lumbergh, division Vice President of Initech\u201d appears on the bottom of the screen. In that case, you should add an audio description to the video that announces \u201cBill Lumbergh, division Vice President of Initech\u201d, just before Bill starts speaking. 
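If you host the video yourself, a minimal sketch of the markup might look like this (the file names are hypothetical, and kind=\"descriptions\" points to a text track that assistive technology can voice; support for description tracks still varies between browsers):\n<video controls>\n <source src=\"tps-reports.mp4\" type=\"video/mp4\">\n <!-- captions for users who cannot hear the audio; descriptions for users who cannot see the video -->\n <track kind=\"captions\" src=\"tps-reports-captions.vtt\" srclang=\"en\" label=\"English captions\">\n <track kind=\"descriptions\" src=\"tps-reports-descriptions.vtt\" srclang=\"en\" label=\"Audio descriptions\">\n</video>\n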
As always, you can make life easier for yourself by considering all of your users, before the event: in this example, you could ask the speaker to begin by saying, \u201cI\u2019m Bill Lumbergh, division Vice President of Initech, and I\u2019d like to explain our new TPS reports\u201d\u2014so you won\u2019t need to spend time adding an audio description afterwards. \n1.2.4 If your website has live videos that have some audio, you should get a stenographer to provide real-time captions that you can include with the video. I\u2019ll be honest: this can prove tricky nowadays. The Web Content Accessibility Guidelines 2.0 predate YouTube Live, Instagram live Stories, Periscope, and other such services. If your organisation creates a lot of live videos, you might not have enough resources to provide real-time captions for each one. In that case, if you know the contents of the audio beforehand, publish the contents during the live video\u2014or failing that, publish a transcription as soon as possible.\n1.2.5 Remember what I said about the recorded videos that have audio? If you can choose to either add an audio description or add text to the webpage to detail what happens in the video, you should go with the audio description.\n1.2.6 If your website has recorded videos that include audio information, you could provide a sign language version of the audio information; some people understand sign language better than written language. (You don\u2019t need to caption a video of a sign language version of audio information.)\n1.2.7 If your website has recorded videos that have audio, and you need to add an audio description, but the audio doesn\u2019t have enough pauses for you to add an \u201caudio description\u201d narration, you could provide a separate version of that video where you have added pauses to fit the audio description into.\n1.2.8 Let\u2019s go back to the recorded videos that have audio once more! You could add text to the webpage to detail what happens in the video, so that people who can neither read captions nor hear dialogue and audio description can use braille displays to understand your video.\n1.2.9 If your website has live audio, you could get a stenographer to provide real-time captions. Again, if you know the contents of the audio beforehand, publish the contents during the live audio or publish a transcription as soon as possible.\n(Congratulations on making it this far! I know that seems like a lot to remember, but keep in mind that we\u2019ve covered a complex area: helping your users to understand multimedia information that they can\u2019t see and/or hear. Grab a mince pie to celebrate, and let\u2019s keep going.)\n1.3.1 You must mark up your website\u2019s content so that your user\u2019s browser, and any assistive technology they use, can understand the hierarchy of the information and how each piece of information relates to the rest. Once again, you probably know how to do this: use the most appropriate HTML element for each piece of information. Mark up headings, lists, buttons, radio buttons, checkboxes, and links with the most appropriate HTML element. If you\u2019re looking for something to do to keep you busy this Christmas, scroll through the list of the elements of HTML. Do you notice any elements that you didn\u2019t know, or that you\u2019ve never used? Do you notice any elements that you could use on your current projects, to mark up the content more accurately? 
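As a tiny, hypothetical illustration of what choosing the most appropriate element buys you, compare a generic block of markup with a semantic version of the same content:\n<!-- Generic elements tell the browser nothing about the structure -->\n<div class=\"heading\">Our gift guide</div>\n<div class=\"list\">\n <div class=\"list-item\">Stocking fillers</div>\n</div>\n\n<!-- Semantic elements expose the heading and the list to assistive technology for free -->\n<h2>Our gift guide</h2>\n<ul>\n <li>Stocking fillers</li>\n</ul>\n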
Also, revise HTML table advanced features and accessibility, how to structure an HTML form, and how to use the native form widgets\u2014you might be surprised at how much you can do with just HTML! Once you\u2019ve mastered those, you can make your website much more usable for your all of your users.\n1.3.2 If your webpage includes information that your user has to read in a certain order, you must make sure that their browser and assistive technology can present the information in that order. Don\u2019t rely on CSS or whitespace to create that order visually. Check that the order of the information makes sense when CSS and whitespace aren\u2019t formatting it. Also, try using the Tab key to move the focus through the links and form widgets on your webpage. Does the focus go where you expect it to? Keep this in mind when using order in CSS Grid or Flexbox.\n1.3.3 You must not presume that your users can identify sensory characteristics of things on your webpage. Some users can\u2019t tell what you\u2019ve positioned where on the screen. For example, instead of asking your users to \u201cChoose one of the options on the left\u201d, you could ask them to \u201cChoose one of our new products\u201d and link to that section of the webpage.\n1.4.1 You must not rely on colour as the only way to convey something to your users. Some of your users can\u2019t see, and some of your users can\u2019t distinguish between colours. For example, if your webpage uses green to highlight the products that your shop has in stock, you could add some text to identify those products, or you could group them under a sub-heading.\n1.4.2 If your webpage automatically plays a sound for more than 3 seconds, you must make sure your users can stop the sound or change its volume. Don\u2019t rely on your user turning down the volume on their computer; some users need to hear the screen reader on their computer, and some users just want to keep listening to whatever they were listening before your webpage interrupted them!\n1.4.3 You should make sure that your text contrasts enough with its background, so that your users can read it. Bookmark Lea Verou\u2019s Contrast Ratio calculator now. You can enter the text colour and background colour as named colours, or as RGB, RGBa, HSL, or HSLa values. You should make sure that:\n\nnormal text that set at 24px or larger has a ratio of at least 3:1;\nbold text that set at 18.75px or larger has a ratio of at least 3:1;\nall other text has a ratio of at least 4\u00bd:1.\n\nYou don\u2019t have to do this for disabled form controls, decorative stuff, or logos\u2014but you could!\n1.4.4 You should make sure your users can resize the text on your website up to 200% without using their assistive technology\u2014and still access all your content and functionality. You don\u2019t have to do this for subtitles or images of text.\n1.4.5 You should avoid using images of text and just use text instead. In 1998, Jeffrey Veen\u2019s first Hot Design Tip said, \u201cText is text. Graphics are graphics. Don\u2019t confuse them.\u201d Now that you can apply powerful CSS text-styling properties, use CSS Grid to precisely position text, and choose from thousands of web fonts (Jeffrey co-founded Typekit to help with this), you pretty much never need to use images of text. The guidelines say you can use images of text if you let your users specify the font, size, colour, and background of the text in the image of text\u2014but I\u2019ve never seen that on a real website. 
Also, this doesn\u2019t apply to logos.\n1.4.6 Let\u2019s go back to colour contrast for a second. You could make your text contrast even more with its background, so that even more of your users can read it. To do that, use Lea Verou\u2019s Contrast Ratio calculator to make sure that:\n\nnormal text that is 24px or larger has a ratio of at least 4\u00bd:1;\nbold text that 18.75px or larger has a ratio of at least 4\u00bd:1;\nall other text has a ratio of at least 7:1.\n\n1.4.7 If your website has recorded speech, you could make sure there are no background sounds, or that your users can turn off any background sounds. If that\u2019s not possible, you could make sure that any background sounds that last longer than a couple of seconds are at least four times quieter than the speech. This doesn\u2019t apply to audio Captchas, audio logos, singing, or rapping. (Yes, these guidelines mention rapping!)\n1.4.8 You could make sure that your users can reformat blocks of text on your website so they can read them better. To do this, make sure that your users can:\n\nspecify the colours of the text and the background, and\nmake the blocks of text less than 80-characters wide, and \nalign text to the left (or right for right-to-left languages), and \nset the line height to 150%, and \nset the vertical distance between paragraphs to 1\u00bd times the line height of the text, and \nresize the text (without using their assistive technology) up to 200% and still not have to scroll horizontally to read it.\n\nBy the way, when you specify a colour for text, always specify a colour for its background too. Don\u2019t rely on default background colours!\n1.4.9 Let\u2019s return to images of text for a second. You could make sure that you use them only for decoration and logos.\nCan users operate the controls and links on your website?\nThe second guideline has criteria that help you prevent your users from asking, \u201cHow the **** does this thing work?\u201d\n2.1.1 You must make sure that you users can carry out all of your website\u2019s activities with just their keyboard, without time limits for pressing keys. (This doesn\u2019t apply to drawing or anything else that requires a pointing device such as a mouse.) Again, if you use the most appropriate HTML element for each piece of information and for each form element, this should prove easy.\n2.1.2 You must make sure that when the user uses the keyboard to focus on some part of your website, they can then move the focus to some other part of your webpage without needing to use a mouse or touch the screen. If your website needs them to do something complex before they can move the focus elsewhere, explain that to your user. These \u201ckeyboard traps\u201d have become rare, but beware of forms that move focus from one text box to another as soon as they receive the correct number of characters.\n2.1.3 Let\u2019s revisit making sure that you users can carry out all of your website\u2019s activities with just their keyboard, without time limits for pressing keys. You could make sure that your user can do absolutely everything on your website with just the keyboard.\n2.2.1 Sometimes people need more time than you might expect to complete a task on your website. 
If any part of your website imposes a time limit on a task, you must do at least one of these: \n\nlet your users turn off the time limit before they encounter it; or\nlet your users increase the time limit to at least 10 times the default time limit before they encounter it; or\nwarn your users before the time limit expires and give them at least 20 seconds to extend it, and let them extend it at least 10 times.\n\nRemember: these guidelines are practical. They allow you to enforce time limits for real-time events such as auctions and ticket sales, where increasing or extending time limits wouldn\u2019t make sense. Also, the guidelines allow you to enforce a maximum time limit of 20 hours. The editors chose 20 hours because people need to go to sleep at some stage. See? Practical!\n2.2.2 In my experience, this criterion remains the least well-known\u2014even though some users can only use websites that conform to it. If your website presents content alongside other content that can distract users by automatically moving, blinking, scrolling, or updating, you must make sure that your users can:\n\npause, stop, or hide the other content if it\u2019s not essential and lasts more than 5 seconds; and\npause, stop, hide, or control the frequency of the other content if it automatically updates.\n\nIt\u2019s OK if your users miss live information such as stock price updates or football scores; you can\u2019t do anything about that! Also, this doesn\u2019t apply to animations such as progress bars that you put on a website to let all users know that the webpage isn\u2019t frozen.\n(If this one sounds complex, just add a pause button to anything that might distract your users.)\n2.2.3 Let\u2019s go back to time limits on tasks on your website. You could make your website even easier to use by removing all time limits except those on real-time events such as auctions and ticket sales. That would mean your user wouldn\u2019t need to interact with a timer at all.\n2.2.4 You could let your users turn off all interruptions\u2014server updates, promotions, and so on\u2014apart from any emergency information.\n2.2.5 This is possibly my favourite of these criteria! After your website logs your user out, you could make sure that when they log in again, they can continue from where they were without having lost any information. Do that, and you\u2019ll be on everyone\u2019s Nice List this Christmas.\n2.3.1 You must make sure that nothing flashes more than three times a second on your website, unless you can make sure that the flashes remain below the acceptable general flash and red flash thresholds\u2026\n2.3.2 \u2026or you could just make sure that nothing flashes more than three times per second on your website. This is usually an easier goal.\n2.4.1 You must make sure that your users can jump past any blocks of content, such as navigation menus, that are repeated throughout your website. You know the drill here: using HTML\u2019s sectioning elements such as header, nav, main, aside, and footer allows users with assistive technology to go straight to the content they need, and adding \u201cSkip Navigation\u201d links allows everyone to get to your main content faster.\n2.4.2 You must add a proper title to describe each webpage\u2019s topic. 
Your webpage won\u2019t even validate without a title element, so make it a useful one.\n2.4.3 If your users can focus on links and native form widgets, you must make sure that they can focus on elements in an order that makes sense.\n2.4.4 You must make sure that your users can understand the purpose of a link when they read:\n\nthe text of the link; or\nthe text of the paragraph, list item, table cell, or table header for the cell that contains the link; or\nthe heading above the link.\n\nYou don\u2019t have to do that for games and quizzes.\n2.4.5 You should give your users multiple ways to find any webpage within a set of webpages. Add site-wide search and a site map and you\u2019re done!\nThis doesn\u2019t apply for a webpage that is part of a series of actions (like a shopping cart and checkout flow) or to a webpage that is a result of a series of actions (like a webpage confirming that the user has bought what was in the shopping cart).\n2.4.6 You should help your users to understand your content by providing:\n\nheadings that describe the topics of you content;\nlabels that describe the purpose of the native form widgets on the webpage.\n\n2.4.7 You should make sure that users can see which element they have focussed on. Next time you use your website, try hitting the Tab key repeatedly. Does it visually highlight each item as it moves focus to it? If it doesn\u2019t, search your CSS to see whether you\u2019ve applied outline: 0; to all elements\u2014that\u2019s usually the culprit. Use the :focus pseudo-element to define how elements should appear when they have focus.\n2.4.8 You could help your user to understand where the current webpage is located within your website. Add \u201cbreadcrumb navigation\u201d and/or a site map and you\u2019re done.\n2.4.9 You could make links even easier to understand, by making sure that your users can understand the purpose of a link when they read the text of the link. Again, you don\u2019t have to do that for games and quizzes.\n2.4.10 You could use headings to organise your content by topic. \nCan users understand your content?\nThe third guideline has criteria that help you prevent your users from asking, \u201cWhat the **** does this mean?\u201d\n3.1.1 Let\u2019s start this section with the criterion that possibly takes the least time to implement; you must make sure that the user\u2019s browser can identify the main language that your webpage\u2019s content is written in. For a webpage that has mainly English content, use . \n3.1.2 You must specify when content in another language appears in your webpage, like so: I wish you a Joyeux No\u00ebl.. You don\u2019t have to do this for proper names, technical terms, or words that you can\u2019t identify a language for. You also don\u2019t have to do it for words from a different language that people consider part of the language around those words; for example, Come to our Christmas rendezvous! is OK.\n3.1.3 You could make sure that your users can find out the meaning of any unusual words or phrases, including idioms like \u201cstocking filler\u201d or \u201cBah! Humbug!\u201d and jargon such as \u201cVoiceOver\u201d and \u201cTalkBack\u201d. Provide a glossary or link to a dictionary.\n3.1.4 You could make sure that your users can find out the meaning of any abbreviation. For example, VoiceOver pronounces \u201cXmas\u201d as \u201cSmas\u201d instead of \u201cChristmas\u201d. Using the abbr element and linking to a glossary can help. 
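Putting 3.1.1, 3.1.2 and 3.1.4 together, the markup is mercifully small; a sketch, assuming English as the main language of the page, might look like this:\n<html lang=\"en\">\n <body>\n <p>I wish you a <span lang=\"fr\">Joyeux No\u00ebl</span>.</p>\n <p>Merry <abbr title=\"Christmas\">Xmas</abbr>, one and all!</p>\n </body>\n</html>\n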
(Interestingly, VoiceOver pronounces \u201cabbr\u201d as \u201cabbreviation\u201d!)\n3.1.5 Do your users need to be able to read better than a typically educated nine-year-old, to read your content (apart from proper names and titles)? If so, you could provide a version that doesn\u2019t require that level of reading ability, or you could provide images, videos, or audio to explain your content. (You don\u2019t have to add captions or audio description to those videos.)\n3.1.6 You could make sure that your users can access the pronunciation of any word in your content if that word\u2019s meaning depends on its pronunciation. For example, the word \u201cclose\u201d could have one of two meanings, depending on pronunciation, in a phrase such as, \u201cReady for Christmas? Close now!\u201d\n3.2.1 Some users need to focus on elements to access information about them. You must make sure that focusing on an element doesn\u2019t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form.\n3.2.2 Webpages are easier for users when the controls do what they\u2019re supposed to do. Unless you have warned your users about it, you must make sure that changing the value of a control such as a text box, checkbox, or drop-down list doesn\u2019t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form.\n3.2.3 To help your users to find the content they want on each webpage, you should put your navigation elements in the same place on each webpage. (This doesn\u2019t apply when your user has changed their preferences or when they use assistive technology to change how your content appears.) \n3.2.4 When a set of webpages includes things that have the same functionality on different webpages, you should name those things consistently. For example, don\u2019t use the word \u201cSearch\u201d for the search box on one webpage and \u201cFind\u201d for the search box on another webpage within that set of webpages.\n3.2.5 Let\u2019s go back to major changes, such as a new window opening, another element taking focus, or a form being submitted. You could make sure that they only happen when users deliberately make them happen, or when you have warned users about them first. For example, you could give the user a button for updating some content instead of automatically updating that content. Also, if a link will open in a new window, you could add the words \u201copens in new window\u201d to the link text.\n3.3.1 Users make mistakes when filling in forms. Your website must identify each mistake to your user, and must describe the mistake to your users in text so that the user can fix it. One way to identify mistakes reliably to your users is to set the aria-invalid attribute to true in the element that has a mistake. That makes sure that users with assistive technology will be alerted about the mistake. Of course, you can then use the [aria-invalid=\"true\"] attribute selector in your CSS to visually highlight any such mistakes. Also, look into how certain attributes of the input element such as required, type, and list can help prevent and highlight mistakes.\n3.3.2 You must include labels or instructions (and possibly examples) in your website\u2019s forms, to help your users to avoid making mistakes. \n3.3.3 When your user makes a mistake when filling in a form, your webpage should suggest ways to fix that mistake, if possible. 
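Pulling 3.3.1, 3.3.2 and 3.3.3 together, a hypothetical sketch of an email field that labels the control, identifies the mistake in text and suggests a fix might look like this:\n<label for=\"email\">Email address</label>\n<input type=\"email\" id=\"email\" name=\"email\" required\n aria-invalid=\"true\" aria-describedby=\"email-error\">\n<!-- aria-describedby ties the error text to the input for assistive technology -->\n<p id=\"email-error\">That does not look like an email address. Did you mean sandy@example.com?</p>\n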
This doesn\u2019t apply in scenarios where those suggestions could affect the security of the content.\n3.3.4 Whenever your user submits information that:\n\nhas legal or financial consequences; or\naffects information that they have previously saved in your website; or\nis part of a test\n\n\u2026you should make sure that they can:\n\nundo it; or\ncorrect any mistakes, after your webpage checks their information; or\nreview, confirm, and correct the information before they finally submit it.\n\n3.3.5 You could help prevent your users from making mistakes by providing obvious, specific help, such as examples, animations, spell-checking, or extra instructions.\n3.3.6 Whenever your user submits any information, you could make sure that they can:\n\nundo it; or\ncorrect any mistakes, after your webpage checks their information; or\nreview, confirm, and correct the information before they finally submit it.\n\nHave you made your website robust enough to work on your users\u2019 browsers and assistive technologies?\nThe fourth and final guideline has criteria that help you prevent your users from asking, \u201cWhy the **** doesn\u2019t this work on my device?\u201d\n4.1.1 You must make sure that your website works as well as possible with current and future browsers and assistive technology. Prioritise complying with web standards instead of relying on the capabilities of currently popular devices and browsers. Web developers didn\u2019t expect their users to be unwrapping the Wii U Browser five years ago\u2014who knows what browsers and assistive technologies our users will be unwrapping in five years\u2019 time? Avoid hacks, and use the W3C Markup Validation Service to make sure that your HTML has no errors.\n4.1.2 If you develop your own user interface components, you must make their name, role, state, properties, and values available to your user\u2019s browsers and assistive technologies. That should make them almost as accessible as standard HTML elements such as links, buttons, and checkboxes.\n\u201c\u2026and a partridge in a pear tree!\u201d\n\u2026as that very long Christmas song goes. We\u2019ve covered a lot in this article\u2014because your users have a lot of different levels of ability. Hopefully this has demystified the Web Content Accessibility Guidelines 2.0 for you. Hopefully you spotted a few situations that could arise for users on your website, and you now know how to tackle them. \nTo start applying what we\u2019ve covered, you might like to look at Sarah Horton and Whitney Quesenbery\u2019s personas for Accessible UX. Discuss the personas, get into their heads, and think about which aspects of your website might cause problems for them. See if you can apply what we\u2019ve covered today, to help users like them to do what they need to do on your website.\nHow to know when your website is perfectly accessible for everyone\nLOL! There will never be a time when your website becomes perfectly accessible for everyone. Don\u2019t aim for that. Instead, aim for regularly testing and making your website more accessible.\nWeb Content Accessibility Guidelines (WCAG) 2.1\nThe W3C hope to release the Web Content Accessibility Guidelines (WCAG) 2.1 as a \u201crecommendation\u201d (that\u2019s what the W3C call something that we should start using) by the middle of next year. 
Ten years may seem like a long time to move from version 2.0 to version 2.1, but consider the scale of the task: the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible. Keep an eye out for 2.1!\nYou\u2019ll go down in history\nOne last point: I\u2019ve met a surprising number of web designers and developers who do great work to make their websites more accessible without ever telling their users about it. Some of your potential customers have possibly tried and failed to use your website in the past. They probably won\u2019t try again unless you let them know that things have improved. A quick Twitter search for your website\u2019s name alongside phrases like \u201cassistive technology\u201d, \u201cdoesn\u2019t work\u201d, or \u201c#fail\u201d can let you find frustrated users\u2014so you can tell them about how you\u2019re making your website more accessible. Start making your websites work better for everyone\u2014and please, let everyone know.", "year": "2017", "author": "Alan Dalton", "author_slug": "alandalton", "published": "2017-12-03T00:00:00+00:00", "url": "https://24ways.org/2017/wcag-for-people-who-havent-read-them/", "topic": "code"} {"rowid": 203, "title": "Jobs-to-Be-Done in Your UX Toolbox", "contents": "Part 1: What is JTBD?\nThe concept of a \u201cjob\u201d in \u201cJobs-To-Be-Done\u201d is neatly encapsulated by a oft-quoted line from Theodore Levitt:\n\n\u201cPeople want a quarter-inch hole, not a quarter inch drill\u201d.\n\nEven so, Don Norman pointed out that perhaps Levitt \u201cstopped too soon\u201d at what the real customer goal might be. In the \u201cThe Design of Everyday Things\u201d, he wrote:\n\n\u201cLevitt\u2019s example of the drill implying that the goal is really a hole is only partially correct, however. When people go to a store to buy a drill, that is not their real goal. But why would anyone want a quarter-inch hole? Clearly that is an intermediate goal. Perhaps they wanted to hang shelves on the wall. Levitt stopped too soon. Once you realize that they don\u2019t really want the drill, you realize that perhaps they don\u2019t really want the hole, either: they want to install their bookshelves. Why not develop methods that don\u2019t require holes? Or perhaps books that don\u2019t require bookshelves.\u201d\n\nIn other words, a \u201cjob\u201d in JTBD lingo is a way to express a user need or provide a customer-centric problem frame that\u2019s independent of a solution. As Tony Ulwick says:\n\n\u201cA job is stable, it doesn\u2019t change over time.\u201d\n\nAn example of a job is \u201ctiding you over from breakfast to lunch.\u201d You could hire a donut, a flapjack or a banana for that mid-morning snack\u2014whatever does the job. If you can arrive at a clearly identified primary job (and likely some secondary ones too), you can be more creative in how you come up with an effective solution while keeping the customer problem in focus.\nThe team at Intercom wrote a book on their application of JTBD. In it, Des Traynor cleverly characterised how JTBD provides a different way to think about solutions that compete for the same job: \n\n\u201cEconomy travel and business travel are both capable candidates applying for [the job: Get me face-to-face with my colleague in San Francisco], though they\u2019re looking for significantly different salaries. Video conferencing isn\u2019t as capable, but is willing to work for a far smaller salary. 
I\u2019ve a hiring choice to make.\u201d\n\nSo far so good: it\u2019s relatively simple to understand what a job is, once you understand how it\u2019s different from a \u201ctask\u201d. Business consultant and Harvard professor Clay Christensen talks about the concept of \u201chiring\u201d a product to do a \u201cjob\u201d, and firing it when something better comes along. If you\u2019re a company that focuses solutions on the customer job, you\u2019re more likely to succeed. You\u2019ll find these concepts often referred to as \u201cJobs-to-be-Done theory\u201d. But the application of Jobs-to-Be-Done theory is a little more complicated; it comprises several related approaches.\nI particularly like Jim Kalbach\u2019s description of how JTBD is a \u201clens through which to understand value creation\u201d. But it is also more. In my view, it\u2019s a family of frameworks and methods\u2014and perhaps even a philosophy.\nDifferent facets in a family of frameworks\nJTBD has its roots in market research and business strategy, and so it comes to the research table from a slightly different place compared to traditional UX or design research\u2014we have our roots in human-computer interaction and ergonomics. I\u2019ve found it helpful to keep in mind that the application of JTBD theory is an evolving beast, so it\u2019s common to find contradictions across different resources. My own use of it has varied from project to project. In speaking to others who have adopted it in different measures, it seems that we have all applied it in somewhat multifarious ways. As we often like to say in interviews: there are no wrong answers.\nOutcome Driven Innovation\nTony Ulwick\u2019s version of the JTBD history began with Outcome Driven Innovation (ODI), and this approach is best outlined in his seminal article published in the Harvard Business Review in 2002. To understand his more current JTBD approach in his new book \u201cJobs to Be Done: Theory to Practice\u201d, I actually found it beneficial to read his approach in the original 2002 article for a clearer reference point.\nIn the earlier article, Ulwick presented a rigorous approach that combines interviews, surveys and an \u201copportunity\u201d algorithm\u2014a sequence of steps to determine the business opportunity. ODI centres around working with \u201cdesired outcome statements\u201d that you unearth through interviews, followed by a means to quantify the gap between importance and satisfaction in a survey to different types of customers. \nSince 2008, Ulwick has written about using job maps to make sense of what the customer may be trying to achieve. In a recent article, he describes the aim of the activity as \u201cto discover what the customer is trying to get done at different points in executing a job and what must happen at each juncture in order for the job to be carried out successfully.\u201d\nA job map is not strictly a journey map, however tempting it is to see it that way. From a UX perspective, it is one of many models we can use\u2014and as our research team at Clearleft have found, how we use the model can depend on the nature of the jobs we\u2019ve uncovered in interviews and the characteristics of the problem we\u2019re attempting to solve.\nFigure 1. 
Universal job map\nUlwick\u2019s current methodology is outlined in his new book, where he describes a complete end-to-end process: from customer and competitor research to framing market and product strategy.\nThe Jobs-To-Be-Done Interview\nBack in 2013, I attended a workshop by Chris Spiek and Bob Moesta from the ReWired Group on JTBD at the behest of a then-MailChimp colleague, and I came away excited about their approach to product research. It felt different from anything I\u2019d done before and for the first time in years, I felt that I was genuinely adding something new to my research toolbox.\nA key idea is that if you focus on the stories of those who switched to you, and those who switch away from you, you can uncover the core jobs through looking at these opposite ends of engagement.\nThis framework centres around the JTBD interview method, which harnesses the power of a narrative framework to elicit the real reasons why someone \u201chired\u201d something to do a job\u2014be it something physical like a new coffee maker, or a digital service, such as a to-do list app. As you interview, you are trying to unearth the context around the key moments on the JTBD timeline (Figure 2). A common approach is to begin from the point the customer might have purchased something, back to the point where the thought of buying this thing first occurred to them.\nFigure 2. JTBD Timeline\nFigure 3. The Four Forces\nThe Forces Diagram (Figure 3) is a post-interview analysis tool where you can map out what causes customers to switch to something new and what holds them back.\nThe JTBD interview is effective at identifying core and secondary jobs, as well as some context around the user need. Because this method is designed to extract the story from the interviewee, it\u2019s a powerful way to facilitate recall. Having done many such interviews, I\u2019ve noticed one interesting side effect: participants often remember more details later on after the conversation has formally ended. It is worth scheduling a follow-up phone call or keep the channels open.\nStrengths aside, it\u2019s good to keep in mind that the JTBD interview is still primarily an interview technique, so you are relying on the context from the interviewee\u2019s self-reported perspective. For example, a stronger research methodology combines JTBD interviews with contextual research and quantitative methods. \nJob Stories\nAlan Klement is credited for coming up with the term \u201cjob story\u201d to describe the framing of jobs for product design by the team at Intercom:\n\n\u201cWhen \u2026 I want to \u2026 so I can \u2026.\u201d\n\nFigure 4. Anatomy of a Job Story\nUnlike a user story that traditionally frames a requirement around personas, job stories frame the user need based on the situation and context. Paul Adams, the VP of Product at Intercom, wrote:\n\n\u201cWe frame every design problem in a Job, focusing on the triggering event or situation, the motivation and goal, and the intended outcome. [\u2026] We can map this Job to the mission and prioritise it appropriately. It ensures that we are constantly thinking about all four layers of design. We can see what components in our system are part of this Job and the necessary relationships and interactions required to facilitate it. 
We can design from the top down, moving through outcome, system, interactions, before getting to visual design.\u201d\n\nSystems of Progress\nApart from advocating using job stories, Klement believes that a core tenet of applying JTBD revolves around our desire for \u201cself-betterment\u201d\u2014and that focusing on everyone\u2019s desire for self-betterment is core to a successful strategy.\nIn his book, Klement takes JTBD further to being a tool for change through applying systems thinking. There, he introduces the systems of progress and how they can help focus a product strategy approach on being more innovative.\nCoincidentally, I applied similar thinking on mapping systemic change when we were looking to improve users\u2019 trust with a local government forum earlier this year. It\u2019s not just about capturing and satisfying the immediate job-to-be-done, it\u2019s about framing the job so that you can set a clear vision forward on how you can help your users improve their lives in the ways they want to.\nThis is really the point where JTBD becomes a philosophy of practice.\nPart 2: Mixing It Up\nThere has been some misunderstanding about how adopting JTBD means ditching personas or some of our existing design tools or research techniques. This couldn\u2019t have been more wrong.\nFigure 5: Jim Kalbach\u2019s JTBD model\nJim Kalbach has used Outcome-Driven Innovation for around 10 years. In a 2016 article, he presents a synthesised model of how to think about JTBD that has key elements from ODI, Christensen\u2019s theories and the structure of the job story.\nMore interestingly, Kalbach has also combined the use of mental models with JTBD.\nClaire Menke of Udemy has written a comprehensive article about using personas, JTBD and customer journey maps together in order to communicate a more complete story from the users\u2019 perspective. Claire highlights an especially interesting point in her article as she describes her challenges:\n\u201cAfter much trial and error, I arrived at a foundational research framework to suit every team\u2019s needs \u2014 allowing everyone to share the same holistic understanding, but extract the type of information most applicable to their work.\u201d\nIn other words, the organisational context you are in can likely dictate what works best\u2014after all, the goal is to arrive at the best user experience for your audiences. Intercom can afford to go full-on in applying JTBD theory as a dominant approach because they are a start-up, but a large company or organisation with multiple business units may require a mix of tools, outputs and outcomes.\nJTBD is an immensely powerful approach on many fronts\u2014you\u2019ll find many different references that list the ways you can apply JTBD. However, in the context of this discussion, it might also be useful to examine where it lies in our models of how we think about our UX and product processes.\nJTBD in the UX ecosystem\nFigure 6. The Elements of User Experience (source)\nThere are many ways we have tried to explain the UX discipline but I think Jesse James Garrett\u2019s Elements of User Experience is a good place to begin.\nI sometimes also use a little diagram to help me describe the different levels you might work at when you work through the complexity of designing and developing a product. A holistic UX strategy needs to address all the different levels for a comprehensive experience: your individual product UI, product features, product propositions and brand need to have a cohesive definition.\nFigure 7. 
Which level of product focus?\nWe could, of course, also think about where it fits best within the double diamond.\nAgain, bearing in mind that JTBD has its roots in business strategy and market research, it is excellent at clarifying user needs, defining high-level specifications and content requirements. It is excellent for validating brand perception and value proposition \u2014all the way down to your feature set. In other words, it can be extremely powerful all the way through to halfway of the second diamond. You could quite readily combine the different JTBD approaches because they have differences as much as overlaps. However, JTBD generally starts getting a little difficult to apply once we get to the details of UI design.\nThe clue lies in JTBD\u2019s raison d\u2019\u00eatre: a job statement is solution independent. Hence, once we get to designing solutions, we potentially fall into a existential black hole.\nThat said, Jim Kalbach has a quick case study on applying JTBD to content design tucked inside the main article on a synthesised JTBD model. Alan Klement has a great example of how you could use UI to resolve job stories. You\u2019ll notice that the available language of \u201cjobs\u201d drops off at around that point.\nJob statements and outcome statements provide excellent \u201cmini north-stars\u201d as customer-oriented focal points, but purely satisfying these statements would not necessarily guarantee that you have created a seamless and painless user experience.\nPlaying well with others\nYou will find that JTBD plays well with Lean, and other strategy tools like the Value Proposition Canvas. With every new project, there is potential to harness the power of JTBD alongside our established toolbox.\nWhen we need to understand complex contexts where cultural or socioeconomic considerations have to be taken into account, we are better placed with combining JTBD with more anthropological approaches. And while we might be able to evaluate if our product, website or app satisfies the customer jobs through interviews or surveys, without good old-fashioned usability testing we are unlikely to be able to truly validate why the job isn\u2019t being represented as it should. In this case, individual jobs solved on the UI can be set up as hypotheses to be proven right or wrong. \nThe application of Jobs-to-be-Done is still evolving. I\u2019ve found it to be very powerful and I struggle to remember what my UX professional life was like before I encountered it\u2014it has completely changed my approach to research and design.\nThe fact JTBD is still evolving as a practice means we need to be watchful of dogma\u2014there\u2019s no right way to get a UX job done after all, it nearly always depends. At the end of the day, isn\u2019t it about having the right tool for the right job?", "year": "2017", "author": "Steph Troeth", "author_slug": "stephtroeth", "published": "2017-12-04T00:00:00+00:00", "url": "https://24ways.org/2017/jobs-to-be-done-in-your-ux-toolbox/", "topic": "ux"} {"rowid": 195, "title": "Levelling Up for Junior Developers", "contents": "If you are a junior developer starting out in the web industry, things can often seem a little daunting. There are so many things to learn, and as soon as you\u2019ve learnt one framework or tool, there seems to be something new out there.\nI am lucky enough to lead a team of developers building applications for the web. 
During a recent One to One meeting with one of our junior developers, he asked me about a learning path and the basic fundamentals that every developer should know. After a bit of digging around, I managed to come up with a (not so exhaustive) list of principles that was shared with him.\n\nIn this article, I will share the list with you, and hopefully help you level up from junior developer and become a better developer all round. This list doesn\u2019t focus on an particular programming language, but rather coding concepts as a whole. The idea behind this list is that whether you are a front-end developer, back-end developer, full stack developer or just a curious one, these principles apply to everyone that writes code. \nI have tried to be technology agnostic, so that you can use these tips to guide you, whatever your tech stack might be.\nWithout any further ado and in no particular order, let\u2019s get started.\nRefactoring code like a boss\nThe Boy Scouts have a rule that goes \u201calways leave the campground cleaner than you found it.\u201d This rule can be applied to code too and ensures that you leave code cleaner than you found it. As a junior developer, it\u2019s almost certain that you will either create or come across older code that could be improved. The resources below are a guide that will help point you in the right direction.\n\nMy favourite book on this subject has to be Clean Code by Robert C. Martin. It\u2019s a must read for anyone writing code as it helps you identify bad code and shows you techniques that you can use to improve existing code.\nIf you find that in your day to day work you deal with a lot of legacy code, Improving Existing Technology through Refactoring is another useful read.\nDesign Patterns are a general repeatable solution to a commonly occurring problem in software design. My friend and colleague Ranj Abass likes to refer to them as a \u201ccommon language\u201d that helps developers discuss the way that we write code as a pattern. My favourite book on this subject is Head First Design Patterns which goes right back to the basics. Another great read on this topic is Refactoring to Patterns.\nWorking Effectively With Legacy Code is another one that I found really valuable.\n\nImproving your debugging skills\nA solid understanding of how to debug code is a must for any developer. Whether you write code for the web or purely back-end code, the ability to debug will save you time and help you really understand what is going on under the hood.\n\nIf you write front-end code for the web, one of my favourite resources to help you understand how to debug code in Chrome can be found on the Chrome Dev Tools website. While some of the tips are specific to Chrome, these techniques apply to any modern browser of your choice.\nAt Settled, we use Node.js for much of our server side code. Without a doubt, our most trusted IDE has to be Visual Studio Code and the built-in debuggers are amazing. Regardless of whether you use Node.js or not, there are a number of plugins and debuggers that you can use in the IDE. I recommend reading the website of your favourite IDE for more information. \nAs a side note, it is worth mentioning that Chrome Developer Tools actually has functionality that allows you to debug Node.js code too. This makes it a seamless transition from front-end code to server-side code debugging.\nThe Debugging Mindset is an informative online article by Devon H. 
O\u2019Dell and discusses the the psychology of learning strategies that lead to effective problem-solving skills. \n\nA good understanding of relational databases and NoSQL databases\nAlmost all developers will need to persist data at some point in their career. Even if you don\u2019t write SQL queries in your day to day job, a solid understanding of how they work will help you become a better developer.\n\nIf you are a complete newbie when it comes to databases, I recommend checking out Code Academy. They offer a free online course that can help you get your head around how relational databases work. The course is quite basic, but is a useful hands-on approach to learning this topic.\nThis article provides a great explainer for the difference between the SQL and NoSQL databases, and this Stackoverflow answer goes a little deeper into the subject of the two database types.\nIf you\u2019d like to learn more about NoSQL queries, I would recommend starting with this article on MongoDB queries. Unfortunately, there isn\u2019t one overall course as most NoSQL databases have their own syntax. \n\nYou may also have noticed that I haven\u2019t included other types of databases such as Graph or In-memory; it\u2019s worth focussing on the basics before going any deeper.\nPerformance on the web\nIf you build for the web today, it is important to understand how the browser receives and renders the content that you send it. I am pretty passionate about Web Performance, and hope that everyone can learn how to make websites faster and more efficient. It can be fun at the same time!\n\nSteve Souders High Performance Websites is the godfather of web performance books. While it was created a few years ago and many of the techniques might have changed slightly, it is the original book on the subject and set up many of the ground rules that we know about web performance today.\nA free online resource on this topic is the Google Developers website. The site is an up to date guide on the best web performance techniques for your site. It is definitely worth a read.\nThe network plays a key role in delivering data to your users, and it plays a big role in performance on the web. A fantastic book on this topic is Ilya Grigorik\u2019s High Performance Browser Networking. It is also available to read online at hpbn.co.\n\nUnderstand the end to end architecture of your software project\nI find that one of the best ways to improve my knowledge is to learn about the architecture of the software at the company I work at. It gives you a good understanding as to why things are designed the way they are, why certain decisions were made, and gives you an understanding of how you might do things differently with hindsight.\nTry and find someone more senior, such as a Technical Lead or Software Architect, at your company and ask them to explain the overall architecture and draw a few high-level diagrams for you. Not to mention that they will be impressed with your willingness to learn.\n\nI recommend reading Clean Architecture: A Craftsman\u2019s Guide to Software Structure and Design for more detail on this subject.\nFar too often, software projects can be over-engineered and over-architected, it is worth reading Just Enough Software Architecture. The book helps developers understand how the smallest of changes can affect the outcome of your software architecture.\n\nHow are things deployed\nA big part of creating software is actually shipping it! How is the software at your company released into the wild? 
Does your company do Continuous Integration? Continuous Deployment?\n\nEven if you answered no to any of these questions, it is worth finding someone with the knowledge in your company to explain these things to you. If it is not already documented, perhaps you could start a wiki to document everything you\u2019re learning about the system - this is a great way to level up and be appreciated and invaluable.\nA streamlined deployment process is a beautiful thing, and understanding how they work can help you grow your knowledge as a developer. \nContinuous Integration is a practical read on the ins and outs of implementing this deployment technique.\nDocker is another great tool to use when it comes to software deployment. It can be tricky at first to wrap your head around, but it is definitely worth learning about this great technology. The documentation on the website will teach and guide you on how to get started using Docker.\n\nWriting Tests\nTesting is an essential tool in the developer bag of skills. They help you to make big refactoring changes to your code, and feel a lot more confident knowing that your changes haven\u2019t broken anything. There are so many benefits to testing, which make it so important for developers at every level to become acquainted with it/them.\n\nThe book that started it all for me was Roy Osherove\u2019s The Art of Unit Testing. The code in the book is written in C#, but the principles apply to every language. It\u2019s a great, easy-to-understand read.\nAnother great read is How Google Tests Software and covers exactly what it says on the tin. It covers many different testing techniques such as exploratory, black box, white box, and acceptance testing and really helps you understand how large organisations test their code.\n\nSoft skills\nWhilst reading through this article, you\u2019ve probably noticed that a large chunk of it focusses on code and technical ability. Without a doubt, I\u2019d say that it is even more important to be a good teammate. If you look up the definition of soft skills in the dictionary, it is defined as \u201cpersonal attributes that enable someone to interact effectively and harmoniously with other people\u201d and I think that it sums this up perfectly. Working on your \u201csoft skills\u201d is something that can truly help you level up in your career. You may be the world\u2019s greatest coder, but if you colleagues can\u2019t get along with you, your coding skills won\u2019t matter!\nWhile you may not learn how to become the perfect co-worker overnight, I really try and live by the motto \u201cdon\u2019t be an arsehole\u201d. Think about how you like to be treated and then try and treat your co-workers with the same courtesy and respect. The next time you need to make a decision at work, ask yourself \u201cis this something an arsehole would do\u201d? If you answered yes to that question, you probably shouldn\u2019t do it!\nSummary\nLevelling up as a junior developer doesn\u2019t have to be scary. Focus on the fundamentals and they should hold you in good stead, regardless of the new things that come along. Software engineering is built on these great principles that have stood the test of time.\nWhilst researching for this article, I came across a useful Github repo that is worth mentioning. Things Every Programmer Should Know is packed with useful information. I have to admit, I didn\u2019t know everything on there!\nI hope that you have found this list helpful. 
Some of the topics I have mentioned might not be relevant for you at this stage in your career, but should give a nudge in the right direction. After all, knowledge is power!\nIf you are a junior developer reading this article, what would you add to it?", "year": "2017", "author": "Dean Hume", "author_slug": "deanhume", "published": "2017-12-05T00:00:00+00:00", "url": "https://24ways.org/2017/levelling-up-for-junior-developers/", "topic": "code"} {"rowid": 196, "title": "Designing a Remote Project", "contents": "I came across an article recently, which I have to admit made my blood boil a little. Yes, I know it\u2019s the season of goodwill and all that, and I\u2019m going to risk sounding a little Scrooge-like, but I couldn\u2019t help it. It was written by someone who\u2019d tried out \u2018telecommuting\u2019 (big sigh) a.k.a. remote or distributed working. They\u2019d tested it in their company and decided it didn\u2019t work. \nWhy did it enrage me so much? Well, this person sounded like they\u2019d almost set it up to fail. To them, it was the latest buzzword, and they wanted to offer their employees a \u2018perk\u2019. But it was going to be risky, because, well, they just couldn\u2019t trust their employees not to be lazy and sit around in their pyjamas at home, watching TV, occasionally flicking their mousepad to \u2018appear online\u2019. Sounds about right, doesn\u2019t it?\nWell, no. This attitude towards remote working is baked in the past, where working from one office and people all sitting around together in a cosy circle singing kum-by-yah* was a necessity not an option. We all know the reasons remote working and flexibility can happen more easily now: fast internet, numerous communication channels, and so on. But why are companies like Yahoo! and IBM backtracking on this? Why is there still such a negative perception of this way of working when it has so much real potential for the future?\n*this might not have ever really happened in an office.\nSo what is remote working? It can come in various formats. It\u2019s actually not just the typical office worker, working from home on a specific day. The nature of digital projects has been changing over a number of years. In this era where organisations are squeezing budgets and trying to find the best value wherever they can, it seems that the days of whole projects being tackled by one team, in the same place, is fast becoming the past. What I\u2019ve noticed more recently is a much more fragmented way of putting together a project \u2013 a mixture of in-house and agency, or multiple agencies or organisations, or working with an offshore team. In the past we might have done the full integrated project from beginning to end, now, it\u2019s a piece of the pie. \nWhich means that everyone is having to work with people who aren\u2019t sat next to them even more than before. Whether that\u2019s a freelancer you\u2019re working with who\u2019s not in the office, an offshore agency doing development or a partner company in another city tackling UX\u2026 the future is looking more and more like a distributed workplace.\nSo why the negativity, man?\nAs I\u2019ve seen from this article, and from examples of large corporations changing their entire philosophy away from remote working, there\u2019s a lot of negativity towards this way of working. 
Of course if you decide to let everyone work from home when they want, set them off and then expect them all to check in at the right time or be available 24/7 it\u2019s going to be a bit of a mess. Equally if you just jump into work with a team on the other side of the world without any setup, should you expect anything less than a problematic project?\nOkay, okay so what about these people who are going to sit on Facebook all day if we let them work from home? It\u2019s the age old response to the idea of working from home. I can\u2019t see the person, so how do I know what they are doing?\nThis comes up regularly as one of the biggest fears of letting people work remotely. There\u2019s also the perceived lack of productivity and distractions at home. The limited collaboration and communication with distributed workers. The lack of availability. The lower response times. \nHang on a second, can\u2019t these all still be problems even if you\u2019ve got your whole team sat in the same place? \u201cThey won\u2019t focus on work.\u201d How many people will go on Facebook or Twitter whilst sat in an office? \u201cThey won\u2019t collaborate as much.\u201d How many people sit in the office with headphones on to block out distractions? I think we have to move away from the idea that being sat next to people automatically makes them work harder. If the work is satisfying, challenging, and relevant to a person \u2013 surely we should trust them to do it, wherever they are sat?\nThere\u2019s actually a lot of benefits to remote working, and having distributed teams. Offering this as a way of working can attract and retain employees, due to the improved flexibility. There can actually be fewer distractions and disruptions at home, which leads to increased productivity. To paraphrase Jason Fried in his talk \u2018Why work doesn\u2019t happen at work\u2019, at home there are voluntary distractions where you have to choose to distract yourself with something. At the office these distractions become involuntary. Impromptu meetings and people coming to talk to you all the time are actually a lot more disruptive. Often, people find it easier to focus away from the office environment. \nThere\u2019s also the big benefit for a lot of people of the time saved commuting. The employee can actually do a lot that\u2019s beneficial to them in this time, rather than standing squeezed into people\u2019s armpits on public transport. Hence increased job satisfaction. With a distributed team, say if you\u2019re working with an off-shore team, there could be a wider range of talent to pick from and it also encourages diversity. There can be a wider range of cultural differences and opinions brought to a project, which encourages more diverse ways of thinking.\nTackling the issues - or, how to set up a project with a remote team\nBut that isn\u2019t to say running projects with a distributed team or being a remote worker is easy, and can just happen, like that. It needs work \u2013 and good groundwork \u2013 to ensure you don\u2019t set it up to fail. So how do you help create a smoother remote project?\nStart with trust\nFirst of all, the basis of the team needs to be trust. Yes I\u2019m going to sound a little like a cheesy, self-help guru here (perhaps in an attempt to seem less Scrooge-like and inject some Christmas cheer) but you do need to trust the people working remotely as well as them trusting you. This extends to a distributed team. 
You can\u2019t just tell the offshore team what to do, and micromanage them, scared they won\u2019t do what you want, how you want it because you can\u2019t see them. You need to give them ownership and let them manage the tasks. Remember, people are less likely to criticise their own work. Make them own the work and they are more likely to be engaged and productive.\nSet a structure\nDistributed teams and remote workers can fail when there is no structure \u2013 just as much as teams sitting together fail without it too. It\u2019s not so much setting rules, as having a framework to work within. Eliminate blockers before they happen. Think about what could cause issues for the team, and think of ways to solve this. For example, what do you do if you won\u2019t be able to get hold of someone for a few hours because of a time difference? Put together a contingency, e.g. is there someone else on your time zone you could go to with queries after assessing the priority? Would it be put aside until that person is back in? Define team roles and responsibilities clearly. Sit down at the beginning of the project and clearly set out expectations. Also ask the team, what are their expectations of you?\nThere won\u2019t be a one size fits all framework either. Think about your team, the people in it, the type of project you\u2019re working with, the type of client and stakeholder. This should give you an idea of what sort of communications you\u2019ll need on the project. Daily calls, video calls, Slack channels, the choice is yours.\nDecide on the tools\nTo be honest, I could spend hours talking about the different tools you can use for communication. But you know them, right? And in the end it\u2019s not the tool that\u2019s important here - it\u2019s the communication that\u2019s being done on the tool. Tools need to match the type of communications needed for your team. One caveat here though, never rely solely on email! Emails are silos, and can become beasts to manage communications on.\nTransparency in communication\nGood communication is key. Make sure there are clear objectives for communication. Set up one time during the week where those people meet together, discuss all the work during that week that they\u2019ve done. If decisions are made between team members who are together, make sure everyone knows what these are. But try to make collective decisions where you can, when it doesn\u2019t impact on people\u2019s time.\nHave a face-to-face kick off\nYes, I know this might seem to counter my argument, but face-to-face comms are still really important. If it\u2019s feasible, have an in-person meeting to kick off your project, and to kick off your team working together. An initial meeting, to break the ice, discuss ways of working, set the goals, can go a long way to making working with distributed teams successful. If this is really not viable, then hold a video call with the team. Try to make this a little more informal. I know, I know, not the dreaded cringey icebreakers\u2026 but something to make everyone relax and get to know each other is really important. Bring everybody together physically on a regular basis if you can, for example with quarterly meetings. You\u2019ve got to really make sure people still feel part of a team, and it often takes a little more work with a remote team. Connect with new team members, one-on-one first, then you can have more of a \u2018remote\u2019 relationship. 
\nGet visual\nVisual communication is often a lot better tool to use than just a written sentence, and can help bring ideas to life. Encourage people to sketch things, take a photo and add this to your written communications. Or use a mockup tool to sketch ideas.\nBut what about Agile projects?\nThe whole premise of Agile projects is to have face-to-face contact I hear you cry. The Agile Manifesto itself states \u201cThe most efficient and effective method of conveying information to and within a development team is face-to-face conversation\u201d. However, this doesn\u2019t mean the death of remote working. In fact loads of successful companies still run Agile projects, whilst having a distributed team. With all the collaborative tools you can use for centralising code, tracking tasks, visualising products, it\u2019s not difficult to still communicate in a way that works. Just think about how to replicate the principles of Agile remotely - working together daily, a supportive environment, trust, and simplicity. How can you translate these to your remote or distributed team? \nOne last thought to leave you with before you run off to eat your mince pies (in your pyjamas, whilst working). A common mistake in working with a remote project team or working remotely yourself, is replacing distance with time. If you\u2019re away from the office you think you need to always be \u2018on\u2019 \u2013 messaging, being online, replying to requests. If you have a distributed team, you might think a lot of meetings, calls, and messages will be good to foster communication. But don\u2019t overload these meetings, calls, and communication. This can be disruptive in itself. Give people the gift of some uninterrupted time to actually do some work, and not feel like they have to check in every second.", "year": "2017", "author": "Suzanna Haworth", "author_slug": "suzannahaworth", "published": "2017-12-06T00:00:00+00:00", "url": "https://24ways.org/2017/designing-a-remote-project/", "topic": "business"} {"rowid": 211, "title": "Automating Your Accessibility Tests", "contents": "Accessibility is one of those things we all wish we were better at. It can lead to a bunch of questions like: how do we make our site better? How do we test what we have done? Should we spend time each day going through our site to check everything by hand? Or just hope that everyone on our team has remembered to check their changes are accessible?\nThis is where automated accessibility tests can come in. We can set up automated tests and have them run whenever someone makes a pull request, and even alongside end-to-end tests, too.\nAutomated tests can\u2019t cover everything however; only 20 to 50% of accessibility issues can be detected automatically. For example, we can\u2019t yet automate the comparison of an alt attribute with an image\u2019s content, and there are some screen reader tests that need to be carried out by hand too. To ensure our site is as accessible as possible, we will still need to carry out manual tests, and I will cover these later.\nFirst, I\u2019m going to explain how I implemented automated accessibility tests on Elsevier\u2019s ecommerce pages, and share some of the lessons I learnt along the way.\nPicking the right tool\nOne of the hardest, but most important parts of creating our automated accessibility tests was choosing the right tool.\nWe began by investigating aXe CLI, but soon realised it wouldn\u2019t fit our requirements. 
It couldn\u2019t check pages that required a visitor to log in, so while we could test our product pages, we couldn\u2019t test any customer account pages. Instead we moved over to Pa11y. Its beforeScript step meant we could log into the site and test pages such as the order history. \nThe example below shows the how the beforeScript step completes a login form and then waits for the login to complete before testing the page:\nbeforeScript: function(page, options, next) {\n // An example function that can be used to make sure changes have been confirmed before continuing to run Pa11y\n function waitUntil(condition, retries, waitOver) {\n page.evaluate(condition, function(err, result) {\n if (result || retries < 1) {\n // Once the changes have taken place continue with Pa11y testing\n waitOver();\n } else {\n retries -= 1;\n setTimeout(function() {\n waitUntil(condition, retries, waitOver);\n }, 200);\n }\n });\n }\n\n // The script to manipulate the page must be run with page.evaluate to be run within the context of the page\n page.evaluate(function() {\n const user = document.querySelector('#login-form input[name=\"email\"]');\n const password = document.querySelector('#login-form input[name=\"password\"]');\n const submit = document.querySelector('#login-form input[name=\"submit\"]');\n user.value = 'user@example.com';\n password.value = 'password';\n submit.click();\n }, function() {\n // Use the waitUntil function to set the condition, number of retries and the callback\n waitUntil(function() {\n return window.location.href === 'https://example.com';\n }, 20, next);\n });\n}\nThe waitUntil callback allows the test to be delayed until our test user is successfully logged in.\nAnother thing to consider when picking a tool is the type of error messages it produces. aXe groups all elements with the same error together, so the list of issues is a lot easier to read, and it\u2019s easier to identify the most commons problems. For example, here are some elements that have insufficient colour contrast:\nViolation of \"color-contrast\" with 8 occurrences!\nEnsures the contrast between foreground and background colors meets\nWCAG 2 AA contrast ratio thresholds. Correct invalid elements at:\n - #maincontent > .make_your_mark > div:nth-child(2) > p > span > span\n - #maincontent > .make_your_mark > div:nth-child(4) > p > span > span\n - #maincontent > .inform_your_decisions > div:nth-child(2) > p > span > span\n - #maincontent > .inform_your_decisions > div:nth-child(4) > p > span > span\n - #maincontent > .inform_your_decisions > div:nth-child(6) > p > span > span\n - #maincontent > .inform_your_decisions > div:nth-child(8) > p > span > span\n - #maincontent > .inform_your_decisions > div:nth-child(10) > p > span > span\n - #maincontent > .inform_your_decisions > div:nth-child(12) > p > span > span\nFor details, see: https://dequeuniversity.com/rules/axe/2.5/color-contrast\naXe also provides links to their site where they discuss the best way to fix the problem.\nIn comparison, Pa11y lists each individual error which can lead to a very verbose list. 
However, it does provide helpful suggestions of how to fix problems, such as suggesting an alternative shade of a colour to use:\n\u2022 Error: This element has insufficient contrast at this conformance level.\n Expected a contrast ratio of at least 4.5:1, but text in this element has a contrast ratio of 2.96:1.\n Recommendation: change text colour to #767676.\n \u23a3 WCAG2AA.Principle1.Guideline1_4.1_4_3.G18.Fail\n \u23a3 #maincontent > div:nth-child(10) > div:nth-child(8) > p > span > span\n \u23a3 Featured products:\nIntegrating into our build pipeline\nWe decided the perfect time to run our accessibility tests would be alongside our end-to-end tests. We have a Jenkins job that detects changes to our staging site and then triggers the end-to-end tests, and in turn our accessibility tests. Our Jenkins job retrieves the contents of a GitHub repository containing our Pa11y script file and npm package manifest.\nOnce Jenkins has cloned the repository, it installs any dependencies and executes the tests via:\nnpm install && npm test\nBundling the URLs to be tested into our test script means we don\u2019t have a command line style test where we list each URL we wish to test in the Jenkins CLI. It\u2019s an effective method but can also be cluttered, and obscure which URLs are being tested.\nIn the middle of the office we have a monitor displaying a Jenkins dashboard and from this we can see if the accessibility tests are passing or failing. Everyone in the team has access to the Jenkins logs and when the build fails they can see why and fix the issue.\nFixing the issues\nAs mentioned earlier, Pa11y can generate a long list of areas for improvement which can be very verbose and quite overwhelming. I recommend going through the list to see which issues occur most frequently and fix those first. For example, we initially had a lot of errors around colour contrast, and one shade of grey in particular. By making this colour darker, the number of errors decreased, and we could focus on the remaining issues.\nAnother thing I like to do is to tackle the quick fixes, such as adding alt text to images. These are small things that allow us to make an impact instantly, giving us time to fix more detailed concerns such as addressing tabindex issues, or speaking to our designers about changing the contrast of elements on the site.\nManual testing\nAdding automated tests to check our site for accessibility is great, but as I mentioned earlier, this can only cover 20-50% of potential issues. To improve on this, we need to test by hand too, either by ourselves or by asking others.\nOne way we can test our site is to throw our mouse or trackpad away and interact with the site using only a keyboard. This allows us to check items such as tab order, and ensure menu items, buttons etc. can be used without a mouse. The commands may be different on different operating systems, but there are some great guides online for learning more about these. \nIt\u2019s tempting to add alt text and aria-labels to make errors go away, but if they don\u2019t make any sense, what use are they really? Using a screenreader we can check that alt text accurately represents the image. This is also a great way to double check that our ARIA roles make sense, and that they correctly identify elements and how to interact with them. When testing our site with screen readers, it\u2019s important to remember that not all screen readers are the same and some may interact with our site differently to others. 
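\nTo make the keyboard and ARIA points above a little more concrete, here is a small, hypothetical sketch (the class name and label text are invented for the example): a div styled to look like a button can sail through an automated scan, yet manual keyboard testing will show it cannot be focused or activated until it is given a role, an accessible name and some key handling.\n// Hypothetical example only: the .js-save class and the label text are made up\nconst fakeButton = document.querySelector('.js-save');\nfakeButton.setAttribute('role', 'button'); // expose a sensible role to assistive technology\nfakeButton.setAttribute('tabindex', '0'); // make it reachable with the Tab key\nfakeButton.setAttribute('aria-label', 'Save your order'); // a label that describes the action, not the widget\nfakeButton.addEventListener('keydown', (event) => {\n // activate with Enter or Space, as a native button would\n if (event.key === 'Enter' || event.key === ' ') {\n event.preventDefault();\n fakeButton.click();\n }\n});\nOf course, the simpler fix is usually to use a real button element in the first place; the point is that only hands-on testing tends to reveal gaps like these.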
\nConsider asking a range of people with different needs and abilities to test your site, too. People experience the web in numerous ways, be they permanent, temporary or even situational. They may interact with your site in ways you hadn\u2019t even thought about, so this is a good way to broaden your knowledge and awareness.\nTips and tricks\nOne of our main issues with Pa11y is that it may find issues we don\u2019t have the power to solve. A perfect example of this is the one pixel image Facebook injects into our site. So, we wrote a small function to go through such errors and ignore the ones that we cannot fix.\nconst test = pa11y({\n ....\n hideElements: '#ratings, #js-bigsearch',\n ...\n});\n\nconst ignoreErrors: string[] = [\n '',\n '',\n ''\n ];\n const filterResult = result => {\n if (ignoreErrors.indexOf(result.context) > -1) {\n return false;\n }\n return true;\n };\nInitially we wanted to focus on fixing the major problems, so we added a rule to ignore notices and warnings. This made the list of errors much smaller and allowed us to focus on fixing major issues such as colour contrast and missing alt text. The ignored notices and warnings can be added in later after these larger issues have been resolved. \nconst test = pa11y({\n ignore: [\n 'notice',\n 'warning'\n ],\n...\n});\nJenkins gotchas\nWhile using Jenkins we encountered a few problems. Sometimes Jenkins would indicate a build had passed when in reality it had failed. This was because Pa11y had timed out due to PhantomJS throwing an error, or the test didn\u2019t go past the first URL. Pa11y has recently released a new beta version that uses headless Chrome instead of PhantomJS, so hopefully these issues will occur less often. \nWe tried a few approaches to solve these issues. First we added error handling, iterating over the array of test URLs so that if an unexpected error happened, we could catch it and exit the process with an error indicating that the job had failed (using process.exit(1)). \nfor (const url of urls) {\n try {\n console.log(url);\n let urlResult = await run(url);\n urlResult = urlResult.filter(filterResult);\n urlResult.forEach(result => console.log(result));\n }\n catch (e) {\n console.log('Error:', e);\n process.exit(1);\n }\n}\nWe also had issues with unhandled rejections sometimes caused by a session disconnecting or similar errors. To avoid Jenkins indicating our site was passing with 100% accessibility, when in reality it had not executed any tests, we instructed Jenkins to fail the job when an unhandled rejection or uncaught exception occurred:\nprocess.on('unhandledRejection', (reason, p) => {\n console.log('Unhandled Rejection at:', p, 'reason:', reason);\n process.exit(1);\n});\nprocess.on('uncaughtException', (err) => {\n console.log(`Caught exception: ${err}`);\n process.exit(1);\n});\nNow it\u2019s your turn\nThat\u2019s it! That\u2019s how we automated accessibility testing for Elsevier ecommerce pages, allowing us to improve our site and make it more accessible for everyone. 
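\nIf you would like to try something similar on your own project, a stripped-down starting point might look roughly like the sketch below. It assumes the same callback-style version of Pa11y used in the snippets above, and the URL is a placeholder, so treat it as a sketch to adapt rather than a drop-in script.\nconst pa11y = require('pa11y');\n\n// start with errors only, as described earlier, and add notices and warnings back in later\nconst test = pa11y({\n ignore: ['notice', 'warning']\n});\n\ntest.run('https://example.com', (error, results) => {\n if (error) {\n console.error(error);\n process.exit(1);\n }\n results.forEach(result => console.log(result.message));\n // a non-zero exit code is what tells Jenkins the build has failed\n process.exit(results.some(result => result.type === 'error') ? 1 : 0);\n});\nFrom there you can layer on the beforeScript login step, the URL loop and the error handling shown above.\n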
I hope our experience can help you automate accessibility tests on your own site, and bring the web a step closer to being accessible to all.", "year": "2017", "author": "Seren Davies", "author_slug": "serendavies", "published": "2017-12-07T00:00:00+00:00", "url": "https://24ways.org/2017/automating-your-accessibility-tests/", "topic": "code"} {"rowid": 210, "title": "Stop Leaving Animation to the Last Minute", "contents": "Our design process relies heavily on static mockups as deliverables and this makes it harder than it needs to be to incorporate UI animation in our designs. Talking through animation ideas and dancing out the details of those ideas can be fun; but it\u2019s not always enough to really evaluate or invest in animated design solutions. \nBy including deliverables that encourage discussing animation throughout your design process, you can set yourself (and your team) up for creating meaningful UI animations that feel just as much a part of the design as your colour palette and typeface. You can get out of that \u201crunning out of time to add in the animation\u201d trap by deliberately including animation in the early phases of your design process. This will give you both the space to treat animation as a design tool, and the room to iterate on UI animation ideas to come up with higher quality solutions. Two deliverables that can be especially useful for this are motion comps and animated interactive prototypes. \nMotion comps - an animation deliverable\nMotion comps (also called animatics or motion mock-ups) are usually video representation of UI animations. They are used to explore the details of how a particular animation might play out. And they\u2019re most often made with timeline-based tools like Adobe After Effects, Adobe Animate, or Tumult Hype. \nThe most useful things about motion comps is how they allow designers and developers to share the work of creating animations. (Instead of pushing all the responsibility of animation on one group or the other.) For example, imagine you\u2019re working on a design that has a content panel that can either be open or closed. You might create a mockup like the one below including the two different views: the closed state and the open state. If you\u2019re working with only static deliverables, these two artboards might be exactly what you handoff to developers along with the instruction to animate between the two. \n\nOn the surface that seems pretty straight forward, but even with this relatively simple transition there\u2019s a lot that those two artboards don\u2019t address. There are seven things that change between the closed state and the open state. That\u2019s seven things the developer building this out has to figure out how to move in and out of view, when, and in what order. And all of that is even before starting to write the code to make it work. \nBy providing only static comps, all the logic of the animation falls on the developer. This might go ok if she has the bandwidth and animation knowledge, but that\u2019s making an awful lot of assumptions.\nInstead, if you included a motion mock up like this with your static mock ups, you could share the work of figuring out the logic of the animation between design and development. Designers could work out the logic of the animation in the motion comp, exploring which items move at which times and in which order to create the opening and closing transitions. 
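\nTo make that concrete, here is a rough sketch of the kind of animation logic that otherwise lands on the developer to guess at; the selectors, durations, delays and distances are all invented for illustration, and they are exactly the decisions a motion comp would capture up front.\n// Invented selectors and timings, purely to illustrate the decisions a motion comp records\nconst panel = document.querySelector('.content-panel');\nconst items = panel.querySelectorAll('.panel-item');\n\nfunction openPanel() {\n // the panel expands first...\n panel.animate(\n [{ height: '0px', opacity: 0 }, { height: '320px', opacity: 1 }],\n { duration: 300, easing: 'ease-out', fill: 'forwards' }\n );\n // ...then each item fades and slides in, staggered so the order reads clearly\n items.forEach((item, index) => {\n item.animate(\n [{ opacity: 0, transform: 'translateY(12px)' }, { opacity: 1, transform: 'translateY(0)' }],\n { duration: 200, delay: 150 + index * 60, easing: 'ease-out', fill: 'forwards' }\n );\n });\n}\nEvery one of those numbers is a design decision, and working them out together in a motion comp beats leaving them to be guessed at the end of the project.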
\n\nThe motion comp can also be used to iterate on different possible animation approaches before any production code has to be committed to. Sharing the work and giving yourself time to explore animation ideas before you\u2019re backed up against the deadline will lead to happier teammates and better design solutions. \nWhen to use motion comps\nI\u2019m not a fan of making more deliverables just for the sake of having more things to make, so I find it helps to narrow down what question I\u2019m trying to answer before choosing which sort of deliverable to make to investigate. \nMotion comps can be most helpful for answering questions like: \n\nExactly how should this animation look? \nWhich items should move? Where? And when? \nDo the animation qualities reflect our brand or our voice and tone?\nOne of the added bonuses of creating motion comps to answer these questions is that you\u2019ll have a concrete thing to bring to design critiques or reviews to get others\u2019 input on them as well.\n\nUsing motion comps as handoff\nMotion comps are often used to hand off animation ideas from design to development. They can be super useful for this, but they\u2019re even more useful when you include the details of the motion specs with them. (It\u2019s difficult, if not impossible, to glean these details from playing back a video.)\nMore specifically, you\u2019ll want to include:\n\nDurations and the properties animated for each animation\nEasing curve values or spring values used\nDelay values and repeat counts\n\nIn many cases you\u2019ll have to collect these details up manually. But this isn\u2019t necessarily something that will take a lot of time. If you take note of them as you\u2019re creating the motion comp, chances are most of these details will already be top of mind. (Also, if you use After Effects for your motion comps, the Inspector Spacetime plugin might be helpful for this task.)\nAnimated prototypes - an interactive deliverable\nMaking prototypes isn\u2019t a new idea for web work by any stretch, but creating prototypes that include animation \u2013 or even creating prototypes specifically to investigate potential animation solutions \u2013 can go a long way towards having higher quality animations in your final product.\nInteractive prototypes are web or app-based, or displayed in a particular tool\u2019s preview window to create a usable version of interactions that might end up in the end product. They\u2019re often made with prototyping apps like Principle, Framer, or coded up in HTML, CSS and JS directly like the example below.\nSee the Pen Prototype example by Val Head (@valhead) on CodePen.\n\nThe biggest difference between motion comps and animated prototypes is the interactivity. Prototypes can respond to taps, drags or gestures, while motion comps can only play back in a linear fashion. Generally speaking, this makes prototypes a bit more of an effort to create, but they can also help you solve different problems. The interactive nature of prototypes can also make them useful for user testing to further evaluate potential solutions. \nWhen to use prototypes\nWhen it comes to testing out animation ideas, animated prototypes can be especially helpful in answering questions like these: \n\nHow will this interaction feel to use? (Interactive animations often have different timing needs than animations that are passively viewed.)\nWhat will the animation be like with real data or real content? \nDoes this animation fit the context of the task at hand? 
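\nA code-based prototype for exploring questions like these can stay deliberately rough. The sketch below uses made-up class names and handles no edge cases; it simply wires a drag gesture straight to a panel\u2019s position so you can feel the interaction with real content before any production details are decided.\n// Throwaway prototype code, not production code: invented selector, no edge cases\n// (on touch devices you may also want touch-action: none on the element in your CSS)\nconst sheet = document.querySelector('.prototype-sheet');\nlet startY = 0;\nlet dragging = false;\n\nsheet.addEventListener('pointerdown', (event) => {\n dragging = true;\n startY = event.clientY;\n sheet.setPointerCapture(event.pointerId);\n});\n\nsheet.addEventListener('pointermove', (event) => {\n if (!dragging) return;\n // map the drag distance straight onto the panel so the gesture feels direct\n const offset = Math.max(0, event.clientY - startY);\n sheet.style.transform = `translateY(${offset}px)`;\n});\n\nsheet.addEventListener('pointerup', () => {\n dragging = false;\n // snap back with a transition, then tweak the duration and easing until it feels right\n sheet.style.transition = 'transform 250ms ease-out';\n sheet.style.transform = 'translateY(0)';\n setTimeout(() => { sheet.style.transition = ''; }, 250);\n});\nBecause it responds to real input, even a throwaway sketch like this answers the \u201chow will this feel\u201d question in a way a linear motion comp cannot.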
\n\nPrototypes can be used to investigate the same questions that motion comps do if you\u2019re comfortable working in code or your prototyping tool of choice has capabilities to address high fidelity animation details. There are so many different prototyping tools out there at the moment, you\u2019re sure to be able to find one that fits your needs. \nAs a quick side note: If you\u2019re worried that your coding skills might not be up to par to prototype in code, know that prototype code doesn\u2019t have to be production quality code. Animated prototypes\u2019 main concern is working out the animation details. Once you\u2019ve arrived at a combination of animations that works, the animation specifics can be extracted or the prototype can be refactored for production.\nMotion comp or prototype?\nBoth motion comps and prototypes can be extremely useful in the design process and you can use whichever one (or ones) that best fits your team\u2019s style. The key thing that both offer is a way to make animation ideas visible and sharable. When you and your teammate are both looking at the same deliverable, you can be confident you\u2019re talking about the same thing and discuss its pros and cons more easily than just describing the idea verbally. \nMotion comps tend to be more useful earlier in the design process when you want to focus on the motion without worrying about the underlying structure or code yet. Motion comps also be great when you want to try something completely new. Some folks prefer motion comps because the tools for making them feel more familiar to them which means they can work faster. \nPrototypes are most useful for animations that rely heavily on interaction. (Getting the timing right for interactions can be tough without the interaction part sometimes.) Prototypes can also be helpful to investigate and optimize performance if that\u2019s a specific concern.\nGive them a try\nWhichever deliverables you choose to highlight your animation decisions, including them in your design reviews, critiques, or other design discussions will help you make better UI animation choices. More discussion around UI animation ideas during the design phase means greater buy-in, more room for iteration, and higher quality UI animations in your designs. Why not give them a try for your next project?", "year": "2017", "author": "Val Head", "author_slug": "valhead", "published": "2017-12-08T00:00:00+00:00", "url": "https://24ways.org/2017/stop-leaving-animation-to-the-last-minute/", "topic": "design"} {"rowid": 216, "title": "Styling Components - Typed CSS With Stylable", "contents": "There\u2019s been a lot of debate recently about how best to style components for web apps so that styles don\u2019t accidentally \u2018leak\u2019 out of the component they\u2019re meant for, or clash with other styles on the page.\nElaborate CSS conventions have sprung up, such as OOCSS, SMACSS, BEM, ITCSS, and ECSS. These work well, but they are methodologies, and require everyone in the team to know them and follow them, which can be a difficult undertaking across large or distributed teams.\nOthers just give up on CSS and put all their styles in JavaScript. Now, I\u2019m not bashing JS, especially so close to its 22nd birthday, but CSS-in-JS has problems of its own. 
Browsers have 20 years’ experience in optimising their CSS engines, so JavaScript won’t be as fast as using real CSS, and in any case, this approach requires waiting for the JS to download, parse and execute before it can render the styles.
There’s another problem with CSS-in-JS, too. Since Responsive Web Design hit the streets, most designers no longer make comps in Photoshop or its equivalents; instead, they write CSS. Why hire an expensive design professional and require them to learn a new way of doing their job?
A recent thread on Twitter asked “What’s your biggest gripe with CSS-in-JS?”, and the replies were illuminating: “Always having to remember to camelCase properties then spending 10min pulling hair out when you do forget”, “the cryptic domain-specific languages that each of the frameworks do just ever so slightly differently”, “When I test look and feel in browser, then I copy paste from inspector, only to have to re-write it as a JSON object”, “Lack of linting, autocomplete, and css plug-ins for colors/ incrementing/ etc”.
If you’re a developer, and you’re still unconvinced, I challenge you to let designers change the font in your IDE to Zapf Chancery and choose a new colour scheme, simply because they like it better. Does that sound like fun? Will that boost your productivity? Thought not.
Some chums at Wix Engineering and I wanted to see if we could square this circle. Wix-hosted sites have always used CSS-in-JS (the concept isn’t new; it was in Netscape 4!), but that was causing performance problems. Could we somehow devise a method of extending CSS (like SASS and LESS do) that gives us styles that are guaranteed not to leak or clash, that is compatible with code editors’ autocompletion, and which could be pre-processed at build time to valid, cross-browser, static CSS?
After a few months and a few proofs of concept (drumroll), yes – we could! We call it Stylable.
Introducing Stylable
Stylable is a CSS pre-processor, like SASS or LESS. It uses CSS syntax, so all your development tools will work. At build time, the Stylable CSS extensions are transpiled to flat, valid, cross-browser vanilla CSS for maximum performance. There’s quite a bit to it, and this is a short article, so let’s look at the basic concepts.
Components all the way down
Stylable is designed for component-based systems. Imagine you have a Gallery component. Within that, there is a Navigation component (for example, containing ‘next’, ‘previous’, ‘show all thumbnails’, and ‘show all albums’ controls), and within that there are NavButton components. Each component is discrete, used elsewhere in the system in different contexts, perhaps maintained by different team members or even different organisations – you can use Stylable to add a typed interface to non-Stylable component libraries, as well as using it to build an app from scratch.
Firstly, Stylable will automatically namespace styles so they only apply inside that component, by rewriting them at build time with a unique (but human-readable) prefix. So, for example, .root inside the Gallery component might be re-written as something along the lines of .Gallery123__root (the exact generated prefix will vary).
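As a rough sketch of the idea (the generated prefix below is purely illustrative, not Stylable’s actual output), you write short, readable class names in the component’s stylesheet, and the build step emits collision-proof ones:
/* gallery.st.css - what you write (hypothetical component) */
.root {
 padding: 1em;
}
.navBtn {
 color: rebeccapurple;
}

/* what the build step might emit */
.Gallery123__root {
 padding: 1em;
}
.Gallery123__navBtn {
 color: rebeccapurple;
}
Because this rewriting happens at build time, the browser only ever receives flat, ordinary CSS.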
So far, so BEM-like (albeit without the headache of remembering a convention). But what else can it do?
Custom pseudo-elements
An important feature of Stylable is the ability to reach into a component and style it from the outside, without having to know about its internal structure. Let’s see the guts of a simple JSX button component in the file button.jsx:
render () {
 return (
 <button>
 {/* icon and label correspond to the classes styled in button.st.css */}
 <span className="icon" />
 <span className="label">Click me</span>
 </button>
 );
}
(Note: className is the JSX way of setting a class on an element; this example uses React, but Stylable itself is framework-agnostic.)
I style it using a Stylable stylesheet (the .st.css suffix tells the preprocessor to process this file):
/* button.st.css */

/* note that the root class is automatically placed on the root HTML
element by the Stylable React integration */
.root {
 background: #b0e0e6;
}

.icon {
 display: block;
 height: 2em;
 background-image: url('./assets/btnIcon.svg');
}

.label {
 font-size: 1.2em;
 color: rgba(81, 12, 68, 1.0);
}
Note that Stylable allows all the CSS that you know and love to be included. As Drew Powers wrote in his review:

with Stylable, you get CSS, and every part of CSS. This seems like a “duh” observation, but this is significant if you’ve ever battled with a CSS-in-JS framework over a lost or “hacky” implementation of a basic CSS feature.
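That’s worth a small illustration. Because a .st.css file is just CSS plus some extensions, everyday features such as pseudo-classes and media queries can be used directly. The snippet below is a hypothetical addition to button.st.css, not part of the original component:
/* hypothetical additions to button.st.css - plain CSS features work as-is */
.root:hover {
 background: #9ad1d6;
}

@media (max-width: 30em) {
 .label {
 font-size: 1em;
 }
}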
I can import my Button component into another component - this time, panel.jsx:
/* panel.jsx */
import * as React from 'react';
import {properties, stylable} from 'wix-react-tools';
import {Button} from '../button';
import style from './panel.st.css';

export const Panel = stylable(style)(() => (
 <div>
 <Button className="cancelBtn" />
 </div>
));
In panel.st.css:
/* panel.st.css */
:import {
 -st-from: './button.st.css';
 -st-default: Button;
}

/* cancelBtn is of type Button */
.cancelBtn {
 -st-extends: Button;
 background: cornflowerblue;
}

/* targets the label of