

rowid | title | contents | year | author | author_slug | published | url | topic
336 Practical Microformats with hCard You’ve probably heard about microformats over the last few months. You may have even read the easily digestible introduction at Digital Web Magazine, but perhaps you’ve not found time to actually implement much yet. That’s understandable, as it can sometimes be difficult to see exactly what you’re adding by applying a microformat to a page. Sure, you’re semantically enhancing the information you’re marking up, and the Semantic Web is a great idea and all, but what benefit is it right now, today? Well, the answer to that question is simple: you’re adding lots of information that can be and is being used on the web here and now. The big ongoing battle amongst the big web companies is one of territory over information. Everyone’s grasping for as much data as possible. Some of that information many of us are cautious about giving away, but a lot of it we’re happy to make freely available. Of the data you’re giving away, it makes sense to give it as much meaning as possible, thus enabling anyone from your friends and family to the giant search company down the road to make the most of it. OK, enough of the waffle, let’s get working. Introducing hCard You may have come across hCard. It’s a microformat for describing contact information (or really address book information) from within your HTML. It’s based on the vCard format, which is the format the contacts/address book program on your computer uses. All the usual fields are available – name, address, town, website, email, you name it. If you’re running Firefox and Greasemonkey (or if you can do so, just to try this out), install this user script. What it does is look for instances of the hCard microformat in a page, and then add in a link to pass any hCards it finds to a web service which will convert them to vCards. Take a look at the About the author box at the bottom of this article. It’s an hCard, so you should be able to click the icon the user script inserts and add me to your Outlook contacts or OS X Address Book with just a click. So microformats are useful after all. Free microformats all round! Implementing hCard This is the really easy bit. All the hCard microformat is, is a bunch of predefined class names that you apply to the markup you’ve probably already got around your contact information. Let’s take the example of the About the author box from this article. Here’s how the markup looks without hCard: <div class="bio"> <h3>About the author</h3> <p>Drew McLellan is a web developer, author and no-good swindler from just outside London, England. At the <a href="http://www.webstandards.org/">Web Standards Project</a> he works on press, strategy and tools. Drew keeps a <a href="http://www.allinthehead.com/">personal weblog</a> covering web development issues and themes.</p> </div> This is a really simple example because there are only two key bits of address book information here: my name and my website address. Let’s push it a little and say that the Web Standards Project is the organisation I work for – that gives us Name, Company and URL. To kick off an hCard, you need a containing object with a class of vcard. The div I already have with a class of bio is perfect for this – all it needs to do is contain the rest of the contact information. The next thing to identify is my name. hCard uses a class of fn (meaning Full Name) to identify a name. As in this case there’s no element surrounding my name, we can just use a span. 
These changes give us: <div class="bio vcard"> <h3>About the author</h3> <p><span class="fn">Drew McLellan</span> is a web developer... The two remaining items are my URL and the organisation I belong to. The class names designated for those are url and org respectively. As both of those items are links in this case, I can apply the classes to those links. So here’s the finished hCard. <div class="bio vcard"> <h3>About the author</h3> <p><span class="fn">Drew McLellan</span> is a web developer, author and no-good swindler from just outside London, England. At the <a class="org" href="http://www.webstandards.org/">Web Standards Project</a> he works on press, strategy and tools. Drew keeps a <a class="url" href="http://www.allinthehead.com/">personal weblog</a> covering web development issues and themes.</p> </div> OK, that was easy. By just applying a few easy class names to the HTML I was already publishing, I’ve implemented an hCard that right now anyone with Greasemonkey can click to add to their address book, that Google and Yahoo! and whoever else can index and work out important things like which websites are associated with my name if they so choose (and boy, will they so choose), and in the future who knows what. In terms of effort, practically nil. Where next? So that was a trivial example, but to be honest it doesn’t really get much more complex even with the most pernickety permutations. Because hCard is based on vCard (a mature and well thought-out standard), it’s all tried and tested. Here are some good next steps. Play with the hCard Creator Take a deep breath and read the spec Start implementing hCard as you go on your own projects – it takes very little time hCard is just one of an ever-increasing number of microformats. If this tickled your fancy, I suggest subscribing to the microformats site in your RSS reader to keep in touch with new developments. What’s the take-away? The take-away is this. They may sound like just more Web 2-point-HoHoHo hype, but microformats are a well thought-out and easy to implement way of adding greater depth to the information you publish online. They have some nice benefits right away – certainly at geek-level – but in the longer term they become much more significant. We’ve been at this long enough to know that the web has a long, long memory and that what you publish today will likely be around for years. By putting the extra depth of meaning into your documents now, you can help ensure that they’ll continue to be useful in the future, and not just a bunch of flat ASCII. 2005 Drew McLellan drewmclellan 2005-12-06T00:00:00+00:00 https://24ways.org/2005/practical-microformats-with-hcard/ code
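To make the payoff concrete: when the conversion service mentioned above reads the finished hCard, the vCard it hands back would look something along these lines. This is a sketch of the format rather than the exact output of any particular service:

BEGIN:VCARD
VERSION:3.0
N:McLellan;Drew;;;
FN:Drew McLellan
ORG:Web Standards Project
URL:http://www.allinthehead.com/
END:VCARD

Each hCard class name maps to the vCard property of the same name – fn to FN, org to ORG, url to URL – which is exactly why implementing the microformat costs so little.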
335 Naughty or Nice? CSS Background Images Web Standards based development involves many things – using semantically sound HTML to provide structure to our documents or web applications, using CSS for presentation and layout, using JavaScript responsibly, and of course, ensuring that all that we do is accessible and interoperable to as many people and user agents as we can. This we understand to be good. And it is good. Except when we don’t clearly think through the full implications of using those techniques. Which often happens when time is short and we need to get things done. Here are some naughty examples of CSS background images with their nicer, more accessible counterparts. Transaction related messages I’m as guilty of this as others (or, perhaps, I’m the only one that has done this, in which case this can serve as my holiday season confessional) We use lovely little icons to show status messages for a transaction to indicate if the action was successful, or was there a warning or error? For example: “Your postal/zip code was not in the correct format.” Notice that we place a nice little icon there, and use background colours and borders to convey a specific message: there was a problem that needs to be fixed. Notice that all of this visual information is now contained in the CSS rules for that div: <div class="error"> <p>Your postal/zip code was not in the correct format.</p> </div> div.error { background: #ffcccc url(../images/error_small.png) no-repeat 5px 4px; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; } Using this approach also makes it very easy to create a div.success and div.warning CSS rules meaning we have less to change in our HTML. Nice, right? No. Naughty. Visual design communicates The CSS is being used to convey very specific information. The choice of icon, the choice of background colour and borders tell us visually that there is something wrong. With the icon as a background image – there is no way to specify any alt text for the icon, and significant meaning is lost. A screen reader user, for example, misses the fact that it is an “error.” The solution? Ask yourself: what is the bare minimum needed to indicate there was an error? Currently in the absence of CSS there will be no icon – which (I’m hoping you agree) is critical to communicating there was an error. The icon should be considered content and not simply presentational. The borders and background colour are certainly much less critical – they belong in the CSS. Lets change the code to place the image directly in the HTML and using appropriate alt text to better communicate the meaning of the icon to all users: <div class="bettererror"> <img src="images/error_small.png" alt="Error" /> <p>Your postal/zip code was not in the correct format.</p> </div> div.bettererror { background-color: #ffcccc; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; position: relative; min-height: 1.25em; } div.bettererror img { display: block; position: absolute; left: 0.25em; top: 0.25em; padding: 0; margin: 0; } div.bettererror p { position: absolute; left: 2.5em; padding: 0; margin: 0; } Compare these two examples of transactional messages Status of a Record This example is pretty straightforward. Consider the following: a real estate listing on a web site. There are three “states” for a listing: new, normal, and sold. 
Here’s how they look: Example of a New Listing Example of A Sold Listing If we (forgive the pun) blindly apply the “use a CSS background image” technique we clearly run into problems with the new and sold images – they actually contain content with no way to specify an alternative when placed in the CSS. In the case of the “new” image, we can use the same strategy as we used in the first example (the transaction result). The “new” image should be considered content and is placed in the HTML as part of the <h2>...</h2> that identifies the listing. However when considering the “sold” listing, there are fewer changes to be made to keep the same look: leave the “SOLD” image as a background image and provide the equivalent information elsewhere in the listing – namely, right in the heading. For those that can’t see the background image, the status is communicated clearly and right away. A screen reader user that is navigating by heading or viewing a listing will know right away that a particular property is sold. Of note here is that in both cases (new and sold) placing the status near the beginning of the record helps with a zoom layout as well. Better Example of A Sold Listing Summary Remember: in the holiday season, it’s what you give that counts! Using CSS background images is easy and saves time for you, but think of the children. And everyone else for that matter… CSS background images should only be used for presentational images, not for those that contain content (unless that content is already represented and readily available elsewhere). 2005 Derek Featherstone derekfeatherstone 2005-12-20T00:00:00+00:00 https://24ways.org/2005/naughty-or-nice-css-background-images/ code
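For clarity, here is a sketch of how that better “sold” listing might be marked up – the class names, address and image path are illustrative, not taken from the original example:

<div class="listing sold">
 <h2>Sold: 29 Acacia Road, Anytown</h2>
 <p>Three bedroom semi-detached house with garden…</p>
</div>

div.sold {
 /* purely decorative now - the sold status is already in the heading text */
 background: url(images/sold.png) no-repeat right top;
}

If the style sheet or image fails to load, the word “Sold” in the heading still carries the meaning.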
334 Transitional vs. Strict Markup When promoting web standards, standardistas often talk about XHTML as being more strict than HTML. In a sense it is, since it requires that all elements are properly closed and that attribute values are quoted. But there are two flavours of XHTML 1.0 (three if you count the Frameset DOCTYPE, which is outside the scope of this article), defined by the Transitional and Strict DOCTYPEs. And HTML 4.01 also comes in those flavours. The names reveal what they are about: Transitional DOCTYPEs are meant for those making the transition from older markup to modern ways. Strict DOCTYPEs are actually the default – the way HTML 4.01 and XHTML 1.0 were constructed to be used. A Transitional DOCTYPE may be used when you have a lot of legacy markup that cannot easily be converted to comply with a Strict DOCTYPE. But Strict is what you should be aiming for. It encourages, and in some cases enforces, the separation of structure and presentation, moving the presentational aspects from markup to CSS. From the HTML 4 Document Type Definition: This is HTML 4.01 Strict DTD, which excludes the presentation attributes and elements that W3C expects to phase out as support for style sheets matures. Authors should use the Strict DTD when possible, but may use the Transitional DTD when support for presentation attribute and elements is required. An additional benefit of using a Strict DOCTYPE is that doing so will ensure that browsers use their strictest, most standards compliant rendering modes. Tommy Olsson provides a good summary of the benefits of using Strict over Transitional in Ten questions for Tommy Olsson at Web Standards Group: In my opinion, using a Strict DTD, either HTML 4.01 Strict or XHTML 1.0 Strict, is far more important for the quality of the future web than whether or not there is an X in front of the name. The Strict DTD promotes a separation of structure and presentation, which makes a site so much easier to maintain. For those looking to start using web standards and valid, semantic markup, it is important to understand the difference between Transitional and Strict DOCTYPEs. For complete listings of the differences between Transitional and Strict DOCTYPEs, see XHTML: Differences between Strict & Transitional, Comparison of Strict and Transitional XHTML, and XHTML1.0 Element Attributes by DTD. Some of the differences are more likely than others to cause problems for developers moving from a Transitional DOCTYPE to a Strict one, and I’d like to mention a few of those. Elements that are not allowed in Strict DOCTYPEs center font iframe strike u Attributes not allowed in Strict DOCTYPEs align (allowed on elements related to tables: col, colgroup, tbody, td, tfoot, th, thead, and tr) language background bgcolor border (allowed on table) height (allowed on img and object) hspace name (allowed in HTML 4.01 Strict, not allowed on form and img in XHTML 1.0 Strict) noshade nowrap target text, link, vlink, and alink vspace width (allowed on img, object, table, col, and colgroup) Content model differences An element type’s content model describes what may be contained by an instance of the element type. The most important difference in content models between Transitional and Strict is that blockquote, body, and form elements may only contain block level elements. 
A few examples: text and images are not allowed immediately inside the body element, and need to be contained in a block level element like p or div input elements must not be direct descendants of a form element text in blockquote elements must be wrapped in a block level element like p or div Go Strict and move all presentation to CSS Something that can be helpful when doing the transition from Transitional to Strict DOCTYPEs is to focus on what each element of the page you are working on is instead of how you want it to look. Worry about looks later and get the structure and semantics right first. 2005 Roger Johansson rogerjohansson 2005-12-13T00:00:00+00:00 https://24ways.org/2005/transitional-vs-strict-markup/ code
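To make the content model differences concrete, here is a small before-and-after sketch. The first form validates against a Transitional DOCTYPE but not a Strict one; the second is fine under Strict because the input is wrapped in a block level element:

<!-- Transitional only: the input is a direct descendant of form -->
<form action="/search" method="get">
 <input type="text" name="q" />
</form>

<!-- Valid under a Strict DOCTYPE: the input sits inside a p -->
<form action="/search" method="get">
 <p><input type="text" name="q" /></p>
</form>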
333 The Attribute Selector for Fun and (no ad) Profit If I had a favourite CSS selector, it would undoubtedly be the attribute selector (Ed: You really need to get out more). For those of you not familiar with the attribute selector, it allows you to style an element based on the existence, value or partial value of a specific attribute. At its very basic level you could use this selector to style an element with a particular attribute, such as a title attribute. <abbr title="Cascading Style Sheets">CSS</abbr> In this example I’m going to make all <abbr> elements with a title attribute grey. I am also going to give them a dotted bottom border that changes to a solid border on hover. Finally, for that extra bit of feedback, I will change the cursor to a question mark on hover as well. abbr[title] { color: #666; border-bottom: 1px dotted #666; } abbr[title]:hover { border-bottom-style: solid; cursor: help; } This provides a nice way to show your site users that <abbr> elements with title attributes are special, as they contain extra, hidden information. Most modern browsers such as Firefox, Safari and Opera support the attribute selector. Unfortunately Internet Explorer 6 and below do not support the attribute selector, but that shouldn’t stop you from adding nice usability embellishments to more modern browsers. Internet Explorer 7 looks set to implement this CSS2.1 selector, so expect to see it become more common over the next few years. Styling an element based on the existence of an attribute is all well and good, but it is still pretty limited. Where attribute selectors come into their own is their ability to target the value of an attribute. You can use this for a variety of interesting effects such as styling VoteLinks. VoteWhats? If you haven’t heard of VoteLinks, it is a microformat that allows people to show their approval or disapproval of a link’s destination by adding a pre-defined keyword to the rev attribute. For instance, if you had a particularly bad meal at a restaurant, you could signify your disapproval by adding a rev attribute with a value of vote-against. <a href="http://www.mommacherri.co.uk/" rev="vote-against">Momma Cherri's</a> You could then highlight these links by adding an image to the right of these links. a[rev="vote-against"]{ padding-right: 20px; background: url(images/vote-against.png) no-repeat right top; } This is a useful technique, but it will only highlight VoteLinks on sites you control. This is where user stylesheets come into effect. If you create a user stylesheet containing this rule, every site you visit that uses VoteLinks will receive your new style. Cool huh? However my absolute favourite use for attribute selectors is as a lightweight form of ad blocking. Most online adverts conform to industry-defined sizes. So if you wanted to block all banner-ad sized images, you could simply add this line of code to your user stylesheet. img[width="468"][height="60"], img[width="468px"][height="60px"] { display: none !important; } To hide any banner-ad sized element, such as Flash movies, applets or iframes, simply apply the above rule to every element using the universal selector. *[width="468"][height="60"], *[width="468px"][height="60px"] { display: none !important; } Just bear in mind when using this technique that you may accidentally hide something that isn’t actually an advert; it just happens to be the same size. The Interactive Advertising Bureau lists a number of common ad sizes. 
Using these dimensions, you can create a stylesheet that blocks all the popular ad formats, as sketched below. Apply this as a user stylesheet and you never need to suffer another advert again. Here’s wishing you a Merry, ad-free Christmas. 2005 Andy Budd andybudd 2005-12-11T00:00:00+00:00 https://24ways.org/2005/the-attribute-selector-for-fun-and-no-ad-profit/ code
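As a starting point, a user stylesheet built from a few of the standard IAB dimensions might look like this sketch – extend it with the “px” variants shown earlier and any other sizes you come across:

*[width="468"][height="60"], /* full banner */
*[width="728"][height="90"], /* leaderboard */
*[width="120"][height="600"], /* skyscraper */
*[width="160"][height="600"], /* wide skyscraper */
*[width="300"][height="250"] { /* medium rectangle */
 display: none !important;
}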
332 CSS Layout Starting Points I build a lot of CSS layouts, some incredibly simple, others that cause sleepless nights and remind me of the torturous puzzle books that were given to me at Christmas by aunties concerned for my education. However, most of the time these layouts fit quite comfortably into one of a very few standard formats. For example: Liquid, multiple column with no footer Liquid, multiple column with footer Fixed width, centred Rather than starting out with blank CSS and (X)HTML documents every time you need to build a layout, you can fairly quickly create a bunch of layout starting points, that will give you a solid basis for creating the rest of the design and mean that you don’t have to remember how a three column layout with a footer is best achieved every time you come across one! These starting points can be really basic, in fact that’s exactly what you want as the final design, the fonts, the colours and so on will be different every time. It’s just the main sections we want to be able to quickly get into place. For example, here is a basic starting point CSS and XHTML document for a fixed width, centred layout with a footer. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Fixed Width and Centred starting point document</title> <link rel="stylesheet" type="text/css" href="fixed-width-centred.css" /> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> </head> <body> <div id="wrapper"> <div id="side"> <div class="inner"> <p>Sidebar content here</p> </div> </div> <div id="content"> <div class="inner"> <p>Your main content goes here.</p> </div> </div> <div id="footer"> <div class="inner"> <p>Ho Ho Ho!</p> </div> </div> </div> </body> </html> body { text-align: center; min-width: 740px; padding: 0; margin: 0; } #wrapper { text-align: left; width: 740px; margin-left: auto; margin-right: auto; padding: 0; } #content { margin: 0 200px 0 0; } #content .inner { padding-top: 1px; margin: 0 10px 10px 10px; } #side { float: right; width: 180px; margin: 0; } #side .inner { padding-top: 1px; margin: 0 10px 10px 10px; } #footer { margin-top: 10px; clear: both; } #footer .inner { margin: 10px; } 9 times out of 10, after figuring out exactly what main elements I have in a layout, I can quickly grab the ‘one I prepared earlier’, mark-up the relevant sections within the ready-made divs, and from that point on, I only need to worry about the contents of those different areas. The actual layout is tried and tested, one that I know works well in different browsers and that is unlikely to throw me any nasty surprises later on. In addition, considering how the layout is going to work first prevents the problem of developing a layout, then realising you need to put a footer on it, and needing to redevelop the layout as the method you have chosen won’t work well with a footer. While enjoying your mince pies and mulled wine during the ‘quiet time’ between Christmas and New Year, why not create some starting point layouts of your own? The css-discuss Wiki, CSS layouts section is a great place to find examples that you can try out and find your favourite method of creating the various layout types. 2005 Rachel Andrew rachelandrew 2005-12-04T00:00:00+00:00 https://24ways.org/2005/css-layout-starting-points/ code
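The same approach adapts easily to the other formats in the list at the top. As a rough sketch (my variation, not part of the original example), a liquid version of the same layout keeps the identical markup and simply stops constraining the wrapper, letting the content column flex while the sidebar stays a fixed width:

#wrapper {
 text-align: left;
 margin: 0;
 padding: 0;
}
#side {
 float: right;
 width: 180px;
}
#content {
 /* the right margin leaves room for the floated sidebar at any window width */
 margin: 0 200px 0 0;
}
#footer {
 clear: both;
 margin-top: 10px;
}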
331 Splintered Striper Back in March 2004, David F. Miller demonstrated a little bit of DOM scripting magic in his A List Apart article Zebra Tables. His script programmatically adds two alternating CSS background colours to table rows, making them more readable and visually pleasing, while saving the document author the tedious task of manually assigning the styling to large static data tables. Although David’s original script performs its duty well, it is nonetheless very specific and limited in its application. It only: works on a single table, identified by its id, with at least a single tbody section assigns a background colour allows two colours for odd and even rows acts on data cells, rather than rows, and then only if they have no class or background colour already defined Taking it further In a recent project I found myself needing to apply a striped effect to a medium sized unordered list. Instead of simply modifying the Zebra Tables code for this particular case, I decided to completely recode the script to make it more generic. Being more general purpose, the function in my splintered striper experiment is necessarily more complex. Where the original script only expected a single parameter (the id of the target table), the new function is called as follows: striper('[parent element tag]','[parent element class or null]','[child element tag]','[comma separated list of classes]') This new, fairly self-explanatory function: targets any type of parent element (and, if specified, only those with a certain class) assigns two or more classes (rather than just two background colours) to the child elements inside the parent preserves any existing classes already assigned to the child elements See it in action View the demonstration page for three usage examples. For simplicity’s sake, we’re making the calls to the striper function from the body’s onload attribute. In a real deployment situation, we would look at attaching a behaviour to the onload programmatically — just remember that, as we need to pass variables to the striper function, this would involve creating a wrapper function which would then be attached…something like: function stripe() { striper('tbody','splintered','tr','odd,even'); } window.onload=stripe; A final thought Just because the function is called striper does not mean that it’s limited to purely applying a striped look; as it’s more of a general purpose “alternating class assignment” script, you can achieve a whole variety of effects with it. 2005 Patrick Lauke patricklauke 2005-12-15T00:00:00+00:00 https://24ways.org/2005/splintered-striper/ code
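Patrick’s actual implementation lives in the linked experiment, but as a rough sketch (mine, not his code) a function with that signature could work along these lines:

function striper(parentTag, parentClass, childTag, classString) {
 var classes = classString.split(',');
 var parents = document.getElementsByTagName(parentTag);
 for (var i = 0; i < parents.length; i++) {
  // crude class check, in the spirit of 2005-era scripts
  if (parentClass && parents[i].className.indexOf(parentClass) == -1) { continue; }
  var children = parents[i].getElementsByTagName(childTag);
  for (var j = 0; j < children.length; j++) {
   // cycle through the supplied classes: odd, even, odd, even...
   var extra = classes[j % classes.length];
   // preserve any classes the child already has
   children[j].className = children[j].className ? children[j].className + ' ' + extra : extra;
  }
 }
}

Called as striper('tbody','splintered','tr','odd,even'), it would assign the classes odd and even to alternating table rows inside any tbody with the class splintered.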
330 An Explanation of Ems Ems are so-called because they are thought to approximate the size of an uppercase letter M (and so are pronounced emm), although 1em is actually significantly larger than this. The typographer Robert Bringhurst describes the em thus: The em is a sliding measure. One em is a distance equal to the type size. In 6 point type, an em is 6 points; in 12 point type an em is 12 points and in 60 point type an em is 60 points. Thus a one em space is proportionately the same in any size. To illustrate this principle in terms of CSS, consider these styles: #box1 { font-size: 12px; width: 1em; height: 1em; border:1px solid black; } #box2 { font-size: 60px; width: 1em; height: 1em; border: 1px solid black; } These styles will render like: M and M Note that both boxes have a height and width of 1em but because they have different font sizes, one box is bigger than the other. Box 1 has a font-size of 12px so its width and height is also 12px; similarly the text of box 2 is set to 60px and so its width and height are also 60px. 2005 Richard Rutter richardrutter 2005-12-02T00:00:00+00:00 https://24ways.org/2005/an-explanation-of-ems/ design
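The same arithmetic applies to any property that accepts a length, which is what makes ems useful for spacing that scales with the text. A quick sketch:

p {
 font-size: 12px;
 line-height: 1.5em; /* 1.5 x 12px = 18px */
 margin-bottom: 2em; /* 2 x 12px = 24px */
}

If the font-size later changes – or the reader scales the text – the line height and margin follow along in proportion.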
329 Broader Border Corners A note from the editors: Since this article was written the CSS border-radius property has become widely supported in browsers. It should be preferred to this image technique. A quick and easy recipe for turning those single-pixel borders that the kids love so much into something a little less right-angled. Here’s the principle: We have a box with a one-pixel wide border around it. Inside that box is another box that has a little rounded-corner background image sitting snugly in one of its corners. The inner-box is then nudged out a bit so that it’s actually sitting on top of the outer box. If it’s all done properly, that little background image can mask the hard right angle of the default border of the outer-box, giving the impression that it actually has a rounded corner. Take An Image, Finely Chopped Add A Sprinkle of Markup <div id="content"> <p>Lorem ipsum etc. etc. etc.</p> </div> Throw In A Dollop of CSS #content { border: 1px solid #c03; } #content p { background: url(corner.gif) top left no-repeat; position: relative; left: -1px; top: -1px; padding: 1em; margin: 0; } Bubblin’ Hot The content div has a one-pixel wide red border around it. The paragraph is given a single instance of the background image, created to look like a one-pixel wide arc. The paragraph is shunted outside of the box – back one pixel and up one pixel – so that it is sitting over the div’s border. The white area of the image covers up that part of the border’s corner, and the arc meets up with the top and left border. Because, in this example, we’re applying a background image to a paragraph, its top margin needs to be zeroed so that it starts at the top of its container. Et voilà. Bon appétit. Extra Toppings If you want to apply a curve to each one of the corners and you run out of meaningful markup to hook the background images on to, throw some spans or divs in the mix (there’s nothing wrong with this if that’s the effect you truly want – they don’t hurt anybody) or use some nifty DOM Scripting to put the scaffolding in for you. Note that if you’ve got more than one of these relative corners, you will need to compensate for the starting position of each box which is nested in an already nudged parent. You’re not limited to one pixel wide, rounded corners – the same principles apply to thicker borders, or corners with different shapes. 2005 Patrick Griffiths patrickgriffiths 2005-12-14T00:00:00+00:00 https://24ways.org/2005/broader-border-corners/ design
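If you do resort to extra elements for the remaining corners, the result might look something like this sketch – the class name, image name and dimensions are mine, purely illustrative:

<div id="content">
 <p>Lorem ipsum etc. etc. etc.</p>
 <span class="corner-br"></span>
</div>

#content {
 border: 1px solid #c03;
 position: relative;
}
#content .corner-br {
 /* sits on top of the bottom-right border corner, just as the paragraph does at the top-left */
 position: absolute;
 bottom: -1px;
 right: -1px;
 width: 10px;
 height: 10px;
 background: url(corner-br.gif) no-repeat bottom right;
}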
328 Swooshy Curly Quotes Without Images The problem Take a quote and render it within blockquote tags, applying big, funky and stylish curly quotes both at the beginning and the end without using any images – at all. The traditional way Faint background images under the text, or an image in the markup housed in a little float. Often designers only use the opening curly quote as it’s just too difficult to float a closing one. Why is the traditional way bad? Well, for a start there are no actual curly quotes in the text (unless you’re doing some nifty image replacement). Thus with CSS disabled you’ll only have default blockquote styling to fall back on. Secondly, images don’t resize, so scaling text will have no effect on your graphic curlies. The solution Use really big text. Then it can be resized by the browser, resized using CSS, and even be restyled with a new font style if you fancy it. It’ll also make sense when CSS is unavailable. The problem Creating “Drop Caps” with CSS has been around for a while (Big Dan Cederholm discusses a neat solution in that first book of his), but drop caps are normal characters – the A to Z or 1 to 10 – and these can all be pulled into a set space and do not serve up a ton of whitespace, unlike punctuation characters. Curly quotes aren’t like traditional characters. Like full stops, commas and hashes they float within the character space and leave lots of dead white space, making it bloody difficult to manipulate them with CSS. Styles generally fit around text, so cutting into that character is tricky indeed. Also, all that extra white space is going to push into the quote text and make it look pretty uneven. This grab highlights the actual character space: See how this is emphasized when we add a normal alphabetical character within the span. This is what we’re dealing with here: Then, there’s size. Call in a curly quote at less than 300% font-size and it ain’t gonna look very big. The white space it creates will be big enough, but the curlies will be way too small. We need more like 700% (as in this example) to make an impression, but that sure makes for a big character space. Prepare the curlies Firstly, remove the opening " from the quote. Replace it with the opening curly quote character entity &#8220;. Then replace the closing " with the entity reference for that, which is &#8221;. Now at least the curlies will look nice and swooshy. Add the hooks Two reasons why we aren’t using the :first-letter pseudo-class to manipulate the curlies. Firstly, only CSS2-friendly browsers would get what we’re doing, and secondly we need to affect the last “letter” of our text also – the closing curly quote. So, add a span around the opening curly, and a second span around the closing curly, giving complete control of the characters: <blockquote><span class="bqstart">“</span>Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.<span class="bqend">”</span></blockquote> So far nothing will look any different, aside from the curlies looking a bit nicer. I know we’ve just added extra markup, but the benefits as far as accessibility are concerned are good enough for me, and of course there are no images to download. The CSS OK, easy stuff first. Our first rule .bqstart floats the span left, changes the color, and whacks the font-size up to an exuberant 700%. Our second rule .bqend does the same tricks aside from floating the curly to the right. 
.bqstart { float: left; font-size: 700%; color: #FF0000; } .bqend { float: right; font-size: 700%; color: #FF0000; } That gives us this, which is rubbish. I’ve highlighted the actual span area with outlines: Note that the curlies don’t even fit inside the span! At this stage on IE 6 PC you won’t even see the quotes, as it only places focus on what it thinks is in the div. Also, the quote text is getting all spangled. Fiddle with margin and padding Think of that span outline box as a window, and that you need to position the curlies within that window in order to see them. By adding some small adjustments to the margin and padding it’s possible to position the curlies exactly where you want them, and remove the excess white space by defining a height: .bqstart { float: left; height: 45px; margin-top: -20px; padding-top: 45px; margin-bottom: -50px; font-size: 700%; color: #FF0000; } .bqend { float: right; height: 25px; margin-top: 0px; padding-top: 45px; font-size: 700%; color: #FF0000; } I wanted the blocks of my curlies to align with the quote text, whereas you may want them to dig in or stick out more. Be aware however that my positioning works for IE PC and Mac, Firefox and Safari. Too much tweaking seems to break the magic in various browsers at various times. Now things are fitting beautifully: I must admit that the heights, margins and spacing don’t make a lot of sense if you analyze them. This was a real trial and error job. Get it working on Safari, and IE would fail. Sort IE, and Firefox would go weird. Finished The final thing looks ace, can be resized, looks cool without styles, and can be edited with CSS at any time. Here’s a real example (note that I’m specifying Lucida Grande and then Verdana for my curlies): “Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.” Browsers happy As I said, too much tweaking of margins and padding can break the effect in some browsers. Even now, Firefox insists on dropping the closing curly by approximately 6 or 7 pixels, and if I adjust the padding for that, it’ll crush it into the text on Safari or IE. Weird. Still, as I close now it seems solid through resizing tests on Safari, Firefox, Camino, Opera and IE PC and Mac. Lovely. It’s probably not perfect, but together we can beat the evil typographic limitations of the web and walk together towards a brighter, more aligned world. Merry Christmas. 2005 Simon Collison simoncollison 2005-12-21T00:00:00+00:00 https://24ways.org/2005/swooshy-curly-quotes-without-images/ business
327 Improving Form Accessibility with DOM Scripting The form label element is an incredibly useful little element – it lets you link the form field unambiguously with the descriptive label text that sits alongside or above it. This is a very useful feature for people using screen readers, but there are some problems with this element. What happens if you have one piece of data that, for various reasons (validation, the way your data is collected/stored etc), needs to be collected using several form elements? The classic example is date of birth – ideally, you’ll ask for the date of birth once but you may have three inputs, one each for day, month and year, and you may also need to provide hints about the format required. The problem is that to be truly accessible you need to label each field. So you end up needing something to say “this is a date of birth”, “this is the day field”, “this is the month field” and “this is the year field”. Seems like overkill, doesn’t it? And it can uglify a form no end. There are various ways that you can approach it (and I think I’ve seen them all). Some people omit the label and rely on the title attribute to help the user through; others put text in a label but make the text 1 pixel high, merging it into the background, so that screen readers can still get that information. The most common method, though, is simply to set the label to not display at all using the CSS display:none property/value pairing (a technique which, for the time being, seems to work on most screen readers). But perhaps we can do more with this? The technique I am suggesting as another alternative is as follows (here comes the pseudo-code): Start with a totally valid and accessible form Ensure that each form input has a label that is linked to its related form control Apply a class to any label that you don’t want to be visible (for example superfluous) Then, through the magic of unobtrusive JavaScript/the DOM, manipulate the page as follows once the page has loaded: Find all the label elements that are marked as superfluous and hide them Find out what input element each of these label elements is related to Then apply a hint about formatting required for input (gleaned from the original, now-hidden label text) – add it to the form input as default text Finally, add in a behaviour that clears or selects the default text (as you choose) So, here’s the theory put into practice – a date of birth, grouped using a fieldset, and with the behaviours added in using DOM, and here’s the JavaScript that does the heavy lifting. But why not just use display:none? As demonstrated at Juicy Studio, display:none seems to work quite well for hiding label elements. So why use a sledge hammer to crack a nut? In all honesty, this is something of an experiment, but consider the following: Using the DOM, you can add extra levels of help, potentially across a whole form – or even range of forms – without necessarily increasing your markup (it goes beyond simply hiding labels) Screen readers today may identify a label that is set not to display, but they may not in the future – this might provide a way around By expanding this technique above, it might be possible to visually change the parent container that groups these items – in this case, a fieldset and legend, which are notoriously difficult to style consistently across different browsers – while still retaining the underlying semantic/logical structure Well, it’s an idea to think about at least. How is it for you? 
How else might you use DOM scripting to improve the accessibility or usability of your forms? 2005 Ian Lloyd ianlloyd 2005-12-03T00:00:00+00:00 https://24ways.org/2005/improving-form-accessibility-with-dom-scripting/ code
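Ian’s working script is linked above; as a simplified sketch of the same steps (my code, not his, and it assumes each superfluous label contains a single text node):

window.onload = function() {
 var labels = document.getElementsByTagName('label');
 for (var i = 0; i < labels.length; i++) {
  if (labels[i].className.indexOf('superfluous') == -1) { continue; }
  // hide the label and reuse its text as a hint inside the field
  labels[i].style.display = 'none';
  var field = document.getElementById(labels[i].htmlFor);
  if (field && field.value == '') {
   field.value = labels[i].firstChild.nodeValue;
   field.onfocus = function() { this.select(); };
  }
 }
};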
326 Don't be eval() JavaScript is an interpreted language, and like so many of its peers it includes the all powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code. It’s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you’re using eval() there’s probably something wrong with your design. Common mistakes Here’s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it – but you don’t know the name of the property until runtime. Here’s how NOT to do it: var property = 'bar'; var value = eval('foo.' + property); Yes it will work, but every time that piece of code runs JavaScript will have to kick back in to interpreter mode, slowing down your app. It’s also dirt ugly. Here’s the right way of doing the above: var property = 'bar'; var value = foo[property]; In JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string. Security issues In any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript – you should be extremely cautious of running eval() against any code that may have been tampered with – for example, strings taken from the page query string. Executing untrusted code can leave you vulnerable to cross-site scripting attacks. What’s it good for? Some programmers say that eval() is B.A.D. – Broken As Designed – and should be removed from the language. However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). Here is a simple function for doing exactly that – it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval(). function evalRequest(url) { var xmlhttp = new XMLHttpRequest(); xmlhttp.onreadystatechange = function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { eval(xmlhttp.responseText); } } xmlhttp.open("GET", url, true); xmlhttp.send(null); } If you want this to work with Internet Explorer you’ll need to include this compatibility patch. 2005 Simon Willison simonwillison 2005-12-07T00:00:00+00:00 https://24ways.org/2005/dont-be-eval/ code
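The same pattern crops up when the server returns data rather than code. A common idiom at the moment is to return JSON and parse it with eval(), wrapping the response in parentheses so the interpreter treats it as an expression rather than a block – and exactly the same trust caveats apply:

function parseJsonResponse(xmlhttp) {
 // only do this with responses from a server you control
 return eval('(' + xmlhttp.responseText + ')');
}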
325 "Z’s not dead baby, Z’s not dead" While Mr. Moll and Mr. Budd have pipped me to the post with their predictions for 2006, I’m sure they won’t mind if I sneak in another. The use of positioning together with z-index will be one of next year’s hot techniques Both have been a little out of favour recently. For many, positioned layouts made way for the flexibility of floats. Developers I speak to often associate z-index with Dreamweaver’s layers feature. But in combination with alpha transparency support for PNG images in IE7 and full implementation of position property values, the stacking of elements with z-index is going to be big. I’m going to cover the basics of z-index and how it can be used to create designs which ‘break out of the box’. No positioning? No Z! Remember geometry? The x axis represents the horizontal, the y axis represents the vertical. The z axis, which is where we get the z-index, represents depth. Elements which are stacked using z-index are stacked from front to back and z-index is only applied to elements which have their position property set to relative or absolute. No positioning, no z-index. Z-index values can be either negative or positive, and the element with the highest z-index value appears closest to the viewer, regardless of its order in the source. Furthermore, if more than one element is given the same z-index, the element which comes last in source order comes out top of the pile. Let’s take three <div>s. <div id="one"></div> <div id="two"></div> <div id="three"></div> #one { position: relative; z-index: 3; } #two { position: relative; z-index: 1; } #three { position: relative; z-index: 2; } As you can see, the <div> with the z-index of 3 will appear closest, even though it comes before its siblings in the source order. As these three <div>s have no defined positioning context in the form of a positioned parent such as a <div>, their stacking order is defined from the root element <html>. Simple stuff, but these building blocks are the basis on which we can create interesting interfaces (particularly when used in combination with image replacement and transparent PNGs). Brand building Now let’s take three more basic elements, an <h1>, <blockquote> and <p>, all inside a branding <div> which acts as a new positioning context. By enclosing them inside a positioned parent, we establish a new stacking order which is independent of either the root element or other positioning contexts on the page. <div id="branding"> <h1>Worrysome.com</h1> <blockquote><p>Don' worry 'bout a thing...</p></blockquote> <p>Take the weight of the world off your shoulders.</p> </div> Applying a little positioning and z-index magic we can both set the position of these elements inside their positioning context and their stacking order. As we are going to use background images made from transparent PNGs, each element will allow another further down the stacking order to show through. This makes for some novel effects, particularly in liquid layouts. (Ed: We’re using n below to represent whatever values you require for your specific design.) 
#branding { position: relative; width: n; height: n; background-image: url(n); } #branding>h1 { position: absolute; left: n; top: n; width: n; height: n; background-image: url(h1.png); text-indent: n; } #branding>blockquote { position: absolute; left: n; top: n; width: n; height: n; background-image: url(bq.png); text-indent: n; } #branding>p { position: absolute; right: n; top: n; width: n; height: n; background-image: url(p.png); text-indent: n; } Next we can begin to see how the three elements build upon each other. 1. Elements outlined 2. Positioned elements overlayed to show context 3. Our final result Multiple stacking orders Not only can elements within a positioning context be given a z-index, but those positioning contexts themselves can also be stacked. Two positioning contexts, each with their own stacking order Interestingly each stacking order is independent of that of either the root element or its siblings and we can exploit this to make complex layouts from just a few semantic elements. This technique was used heavily on my recent redesign of Karova.com. Dissecting part of Karova.com First the XHTML. The default template markup used for the site places <div id="nav_main"> and <div id="content"> as siblings inside their container. <div id="container"> <div id="content"> <h2></h2> <p></p> </div> <div id="nav_main"></div> </div> By giving the navigation <div> a lower z-index than the content <div> we can ensure that the positioned content elements will always appear closest to the viewer, despite the fact that the navigation comes after the content in the source. #content { position: relative; z-index: 2; } #nav_main { position: absolute; z-index: 1; } Now applying absolute positioning with a negative top value to the <h2> and a higher z-index value than the following <p> ensures that the header sits not only on top of the navigation but also the styled paragraph which follows it. h2 { position: absolute; z-index: 200; top: -n; } h2+p { position: absolute; z-index: 100; margin-top: -n; padding-top: n; } Dissecting part of Karova.com You can see the full effect in the wild on the Karova.com site. Have a great holiday season! 2005 Andy Clarke andyclarke 2005-12-16T00:00:00+00:00 https://24ways.org/2005/zs-not-dead-baby-zs-not-dead/ design
324 Debugging CSS with the DOM Inspector An Inspector Calls The larger your site and your CSS becomes, the more likely that you will run into bizarre, inexplicable problems. Why does that heading have all that extra padding? Why is my text the wrong colour? Why does my navigation have a large moose dressed as Noel Coward on top of all the links? Perhaps you work in a collaborative environment, where developers and other designers are adding code? In which case, the likelihood of CSS strangeness is higher. You need to debug. You need Firefox’s wise-guy know-it-all, the DOM Inspector. The DOM Inspector knows where everything is in your layout, and more importantly, what causes it to look the way it does. So without further ado, load up any css based site in your copy of Firefox (or Flock for that matter), and launch the DOM Inspector from the Tools menu. The inspector uses two main panels – the left to show the DOM tree of the page, and the right to show you detail: The Inspector will look at whatever site is in the front-most window or tab, but you can also use it without another window. Type in a URL at the top (A), press ‘Inspect’ (B) and a third panel appears at the bottom, with the browser view. I find this layout handier than looking at a window behind the DOM Inspector. Step 1 – find your node! Each element on your page – be it a HTML tag or a piece of text, is called a ‘node’ of the DOM tree. These nodes are all listed in the left hand panel, with any ID or CLASS attribute values next to them. When you first look at a page, you won’t see all those yet. Nested HTML elements (such as a link inside a paragraph) have a reveal triangle next to their name, clicking this takes you one level further down. This can be fine for finding the node you want to look at, but there are easier ways. Say you have a complex rounded box technique that involves 6 nested DIVs? You’d soon get tired of clicking all those triangles to find the element you want to inspect. Click the top left icon © – “Find a node to inspect by clicking on it” and then select the area you want to inspect. Boom! All that drilling down the DOM tree has been done for you! Huzzah! If you’re looking for an element that you know has an ID (such as <ul id="navigation">), or a specific HTML tag or attribute, click the binoculars icon (D) for “Finds a node to inspect by ID, tag or attribute” (You can also use Ctrl-F or Apple-F to do this if the DOM Inspector is the frontmost window) : Type in the name and Bam! You’re there! Pressing F3 will take you to any other instances. Notice also, that when you click on a node in the inspector, it highlights where it is in the browser view with a flashing red border! Now that we’ve found the troublesome node on the page, we can find out what’s up with it… Step 2 – Debug that node! Once the node is selected, we move over to the right hand panel. Choose ‘CSS Style Rules’ from the document menu (E), and all the CSS rules that apply to that node are revealed, in the order that they load: You’ll notice that right at the top, there is a CSS file you might not recognise, with a file path beginning with “resource://”. This is the browsers default CSS, that creates the basic rendering. You can mostly ignore this, especially if use the star selector method of resetting default browser styles. Your style sheets come next. See how helpful it is? It even tells you the line number where to find the related CSS rules! 
If you use CSS shorthand, you’ll notice that the values are split up (e.g. margin-left, margin-right etc.). Now that you can see all the style rules affecting that node, the rest is up to you! Happy debugging! 2005 Jon Hicks jonhicks 2005-12-22T00:00:00+00:00 https://24ways.org/2005/debugging-css-with-the-dom-inspector/ process
323 Introducing UDASSS! Okay. What’s that mean? Unobtrusive Degradable Ajax Style Sheet Switcher! Boy are you in for a treat today ‘cause we’re gonna have a whole lotta Ajaxifida Unobtrucitosity CSS swappin’ Fun! Okay are you really kidding? Nope. I’ve even impressed myself on this one. Unfortunately, I don’t have much time to tell you the ins and outs of what I actually did to get this to work. We’re talking JavaScript, CSS, PHP…Ajax. But don’t worry about that. I’ve always believed that a good A.P.I. is an invisible A.P.I… and this I felt I achieved. The only thing you need to know is how it works and what to do. A Quick Introduction Anyway… First of all, the idea is very simple. I wanted something just like what Paul Sowden put together in Alternative Style: Working With Alternate Style Sheets from Alistapart Magazine EXCEPT a few minor (not-so-minor actually) differences which I’ve listed briefly below: Allow users to switch styles without JavaScript enabled (degradable) Preventing the F.O.U.C. before the window ‘load’ when getting preferred styles Keep the JavaScript entirely off our markup (no onclick’s or onload’s) Make it very very easy to implement (ok, Paul did that too) What I did to achieve this was to use server-side cookies instead of JavaScript cookies. Hence, PHP. However this isn’t a “PHP style switcher” – which is where Ajax comes in. For the extreme technical folks, no, there is no xml involved here, or even a callback response. I only say Ajax because everyone knows what ‘it’ means. With that said, it’s the Ajax that sets the cookies ‘on the fly’. Got it? Awesome! What you need Luckily, I’ve done the work for you. It’s all packaged up in a nice zip file (at the end…keep reading for now) – so from here on out, just follow these instructions As I’ve mentioned, one of the things we’ll be working with is PHP. So, first things first, open up a file called index and save it with a ‘.php’ extension. Next, place the following text at the top of your document (even above your DOCTYPE) <?php require_once('utils/style-switcher.php'); // style sheet path[, media, title, bool(set as alternate)] $styleSheet = new AlternateStyles(); $styleSheet->add('css/global.css','screen,projection'); // [Global Styles] $styleSheet->add('css/preferred.css','screen,projection','Bog Standard'); // [Preferred Styles] $styleSheet->add('css/alternate.css','screen,projection','Tiny Fonts',true); // [Alternate Styles] $styleSheet->add('css/alternate2.css','screen,projection','Big O Fonts',true); // [Alternate Styles] $styleSheet->getPreferredStyles(); ?> The way this works is REALLY EASY. Pay attention closely. Notice in the first line we’ve included our style-switcher.php file. Next we instantiate a PHP class called AlternateStyles() which will allow us to configure our style sheets. So for kicks, let’s just call our object $styleSheet As part of the AlternateStyles object, there lies a public method called add. So naturally with our $styleSheet object, we can call it to (da – da-da-da!) Add Style Sheets! How the add() method works The add method takes in a possible four arguments, only one of which is required. However, you’ll want to add some… since the whole point is working with alternate style sheets. $path can simply be a URI, or an absolute or relative path to your style sheet. $media adds a media attribute to your style sheets. $title gives a name to your style sheets (via the title attribute). $alternate (a boolean) simply tells us that these are the alternate style sheets. 
add() Tips For all global style sheets (meaning the ones that will always be seen and will not be swapped out), simply use the add method as shown next to // [Global Styles]. To add preferred styles, do the same, but add a ‘title’. To add the alternate styles, do the same as what we’ve done to add preferred styles, but add the extra boolean and set it to true. Note the following when adding style sheets Multiple global style sheets are allowed You can only have one preferred style sheet (That’s a browser rule) Feel free to add as many alternate style sheets as you like Moving on Simply add the following snippet to the <head> of your web document: <script type="text/javascript" src="js/prototype.js"></script> <script type="text/javascript" src="js/common.js"></script> <script type="text/javascript" src="js/alternateStyles.js"></script> <?php $styleSheet->drop(); ?> Nothing much to explain here. Just use your copy & paste powers. How to Switch Styles Whether you knew it or not, this baby already has the built-in ‘unobtrusive’ functionality that lets you switch styles by the drop of any link with a class name of ‘altCss‘. Just drop them wherever you like in your document as follows: <a class="altCss" href="index.php?css=Bog_Standard">Bog Standard</a> <a class="altCss" href="index.php?css=Tiny_Fonts">Tiny Fonts</a> <a class="altCss" href="index.php?css=Big_O_Fonts">Big O Fonts</a> Take special note where the file is linking to. Yep. Just linking right back to the page we’re on. The only extra parameters we pass in is a variable called ‘css’ – and within that we append the names of our style sheets. Also take very special note that the names of the style sheets have an under_score taking the place of any spaces we might have. Go ahead… play around and change the style sheet on the example page. Try disabling JavaScript and refreshing your browser. Still works! Cool eh? Well, I put this together in one night so it’s still a work in progress and very beta. If you’d like to hear more about it and its future development, be sure to stop by my site where I’ll definitely be maintaining it. Download the beta anyway Well this wouldn’t be fun if there was nothing to download. So we’re hooking you up so you don’t go home (or logoff) unhappy. Download U.D.A.S.S.S | V0.8 Merry Christmas! Thanks for listening and I hope U.D.A.S.S.S. has been well worth your time and will bring many years of Ajaxy Style Switchin’ Fun! Many Blessings, Merry Christmas and have a great new year! 2005 Dustin Diaz dustindiaz 2005-12-18T00:00:00+00:00 https://24ways.org/2005/introducing-udasss/ code
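For the curious: the drop() call in the head is what actually writes the style sheets into the document. Given the configuration above, its output would presumably be a set of link elements along these lines – this is my sketch of the idea, so check the downloaded source for the exact markup:

<link rel="stylesheet" type="text/css" href="css/global.css" media="screen,projection" />
<link rel="stylesheet" type="text/css" href="css/preferred.css" title="Bog Standard" media="screen,projection" />
<link rel="alternate stylesheet" type="text/css" href="css/alternate.css" title="Tiny Fonts" media="screen,projection" />
<link rel="alternate stylesheet" type="text/css" href="css/alternate2.css" title="Big O Fonts" media="screen,projection" />

The server-side cookie simply decides which of these gets the preferred slot on the next request, which is why the switch survives without JavaScript.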
322 Introduction to Scriptaculous Effects Gather around kids, because this year, much like in that James Bond movie with Denise Richards, Christmas is coming early… in the shape of scrumptious smooth javascript driven effects at your every whim. Now what I’m going to do, is take things down a notch. Which is to say, you don’t need to know much beyond how to open a text file and edit it to follow this article. Personally, I for instance can’t code to save my life. Well, strictly speaking, that’s not entirely true. If my life was on the line, and the code needed was really simple and I wasn’t under any time constraints, then yeah maybe I could hack my way out of it. But my point is this: I’m not a programmer in the traditional sense of the word. In fact, what I do best, is scrounge code off of other people, take it apart and then put it back together with duct tape, chewing gum and dumb blind luck. No, don’t run! That happens to be a good thing in this case. You see, we’re going to be implementing some really snazzy effects (which are considerably more relevant than most people are willing to admit) on your site, and we’re going to do it with the aid of Thomas Fuchs’ amazing Script.aculo.us library. And it will be like stealing candy from a child. What Are We Doing? I’m going to show you the very basics of implementing the Script.aculo.us javascript library’s Combination Effects. These allow you to fade elements on your site in or out, slide them up and down and so on. Why Use Effects at All? Before we get started though, let me just take a moment to explain how I came to see smooth transitions as something more than smoke and mirror-like effects included with little more motive than to dazzle and make parents go ‘uuh, snazzy’. Earlier this year, I had the good fortune of meeting the kind, gentle and quite knowledgeable Matt Webb at a conference here in Copenhagen where we were both speaking (though I will be the first to admit my little talk on Open Source Design was vastly inferior to Matt’s talk). Matt held a talk called Fixing Broken Windows (based on the Broken Windows theory), which really made an impression on me, and which I have since then referred back to several times. You can listen to it yourself, as it’s available from Archive.org. Though since Matt’s session uses many visual examples, you’ll have to rely on your imagination for some of the examples he runs through during it. Also, I think it loses audio for a few seconds every once in a while. Anyway, one of the things Matt talked a lot about, was how our eyes are wired to react to movement. The world doesn’t flicker. It doesn’t disappear or suddenly change and force us to look for the change. Things move smoothly in the real world. They do not pop up. How it Works Once the necessary files have been included, you trigger an effect by pointing it at the ID of an element. Simple as that. Implementing the Effects So now you know why I believe these effects have a place in your site, and that’s half the battle. Because you see, actually getting these effects up and running is deceptively simple. First, go and download the latest version of the library (as of this writing, it’s version 1.5 rc5). Unzip it and open it up. Now we’re going to bypass the instructions in the readme file. Script.aculo.us can do a bunch of quite advanced things, but all we really want from it is its effects. And by sidestepping the rest of the features, we can shave off roughly 80KB of unnecessary javascript, which is well worth it if you ask me. 
As with Drew’s article on Easy Ajax with Prototype, script.aculo.us also uses the Prototype framework by Sam Stephenson. But contrary to Drew’s article, you don’t have to download Prototype, as a version comes bundled with script.aculo.us (though feel free to upgrade to the latest version if you so please). So in the unzipped folder, containing the script.aculo.us files and folder, go into ‘lib’ and grab the ‘prototype.js’ file. Move it to wherever you want to store the javascript files. Then fetch the ‘effects.js’ file from the ‘src’ folder and put it in the same place. To make things even easier for you to get this up and running, I have prepared a small javascript snippet which does some checking to see what you’re trying to do. The script.aculo.us effects are all either ‘turn this off’ or ‘turn this on’. What this snippet does, is check to see what state the target currently has (is it on or off?) and then use the necessary effect. You can either skip to the end and download the example code, or copy and paste this code into a file manually (I’ll refer to that file as combo.js): Effect.OpenUp = function(element) { element = $(element); new Effect.BlindDown(element, arguments[1] || {}); } Effect.CloseDown = function(element) { element = $(element); new Effect.BlindUp(element, arguments[1] || {}); } Effect.Combo = function(element) { element = $(element); if(element.style.display == 'none') { new Effect.OpenUp(element, arguments[1] || {}); } else { new Effect.CloseDown(element, arguments[1] || {}); } } Currently, this code uses the BlindUp and BlindDown code, which I personally like, but there’s nothing wrong with you changing the effect-type into one of the other effects available. Now, include the three files in the header of your code, like so: <script src="prototype.js" type="text/javascript"></script> <script src="effects.js" type="text/javascript"></script> <script src="combo.js" type="text/javascript"></script> Now insert the element you want to use the effect on, like so: <div id="content" style="display: none;">Lorem ipsum dolor sit amet.</div> The above element will start out invisible, and when triggered will be revealed. If you want it to start visible, simply remove the style parameter. And now for the trigger: <a href="javascript:Effect.Combo('content');">Click Here</a> And that is pretty much it. Clicking the link should unfold the DIV targeted by the effect, in this case ‘content’. Effect Options Now, it gets a bit hairy though. The documentation for script.aculo.us is next to non-existent, and because of that you’ll have to do some digging yourself to appreciate the full potential of these effects. First of all, what kind of effects are available? Well you can go to the demo page and check them out, or you can open the ‘effects.js’ file and have a look around, something I recommend doing regardless, to gain an overview of what exactly you’re dealing with. If you dissect it for long enough, you can even distill some of the options available for the various effects. In the case of the BlindUp and BlindDown effect, which we’re using in our example (as triggered from combo.js), one of the things that would be interesting to play with would be the duration of the effect. If it’s too long, it will feel slow and unresponsive. Too fast and it will be imperceptible. You set the options like so: <a href="javascript:Effect.Combo('content', {duration: .2});">Click Here</a> The change from the previous link being the inclusion of , {duration: .2}. 
In this case, I have lowered the duration to 0.2 second, to really make it feel snappy. You can also go all-out and turn on all the bells and whistles of the Blind effect like so (slowed down to a duration of three seconds so you can see what’s going on): <a href="javascript:Effect.Combo('content', {duration: 3, scaleX: true, scaleContent: true});">Click Here</a> Conclusion And that’s pretty much it. The rest is a matter of getting to know the rest of the effects and their options as well as finding out just when and where to use them. Remember the ancient Chinese saying: Less is more. Download Example I have prepared a very basic example, which you can download and use as a reference point. 2005 Michael Heilemann michaelheilemann 2005-12-12T00:00:00+00:00 https://24ways.org/2005/introduction-to-scriptaculous-effects/ code
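Incidentally, if the Blind effects aren’t to your taste, swapping in another pair is a two-line change to combo.js. Here’s a sketch using the Appear and Fade effects that also ship with version 1.5 – these animate opacity rather than height, but slot straight into the same Effect.Combo logic:
Effect.OpenUp = function(element) {
  element = $(element);
  // Appear copes with elements that start out display: none
  new Effect.Appear(element, arguments[1] || {});
}
Effect.CloseDown = function(element) {
  element = $(element);
  // Fade leaves the element hidden, so Effect.Combo's check still works
  new Effect.Fade(element, arguments[1] || {});
}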
321 Tables with Style It might not seem like it but styling tabular data can be a lot of fun. From a semantic point of view, there are plenty of elements to tie some style into. You have cells, rows, row groups and, of course, the table element itself. Adding CSS to a paragraph just isn’t as exciting. Where do I start? First, if you have some tabular data (you know, like a spreadsheet with rows and columns) that you’d like to spiffy up, pop it into a table — its rightful place! To add more semantics to your table — and coincidentally to add more hooks for CSS — break up your table into row groups. There are three types of row groups: the header (thead), the body (tbody) and the footer (tfoot). You can only have one header and one footer but you can have as many table bodies as is appropriate. Sample table example Inspiration Table Striping To improve scanning information within a table, a common technique is to style alternating rows. Also known as zebra tables. Whether you apply it using a class on every other row or turn to JavaScript to accomplish the task, a handy-dandy trick is to use a semi-transparent PNG as your background image. This is especially useful over patterned backgrounds. tbody tr.odd td { background:transparent url(background.png) repeat top left; } * html tbody tr.odd td { background:#C00; filter: progid:DXImageTransform.Microsoft.AlphaImageLoader( src='background.png', sizingMethod='scale'); } We turn off the default background and apply our PNG hack to have this work in Internet Explorer. Styling Columns Did you know you could style a column? That’s right. You can add special column (col) or column group (colgroup) elements. With that you can add border or background styles to the column. <table> <col id="ingredients"> <col id="serve12"> <col id="serve24"> ... Check out the example. Fun with Backgrounds Pop in a tiled background to give your table some character! Internet Explorer’s PNG hack unfortunately only works well when applied to a cell. To figure out which background will appear over another, just remember the hierarchy: (bottom) Table → Column → Row Group → Row → Cell (top) The Future is Bright Once browser-makers start implementing CSS3, we’ll have more power at our disposal. Just with :first-child and :last-child, you can pull off a scalable version of our previous table with rounded corners and all — unfortunately, only Firefox manages to pull this one off successfully. And the selector the masses are clamouring for, nth-child, will make zebra tables easy as eggnog. 2005 Jonathan Snook jonathansnook 2005-12-19T00:00:00+00:00 https://24ways.org/2005/tables-with-style/ code
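A quick aside on the striping above: if you’d rather not hand-class every other row, a few lines of JavaScript can apply the odd class for you. A minimal sketch (the function name is mine, not from any library):
// Add class="odd" to every other row of each tbody, so the
// tbody tr.odd td rule above can paint its striped background.
function stripeTables() {
  if (!document.getElementsByTagName) return;
  var bodies = document.getElementsByTagName("tbody");
  for (var i = 0; i < bodies.length; i++) {
    var rows = bodies[i].getElementsByTagName("tr");
    for (var j = 0; j < rows.length; j += 2) {
      rows[j].className += (rows[j].className ? " " : "") + "odd";
    }
  }
}
window.onload = stripeTables;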
320 DOM Scripting Your Way to Better Blockquotes Block quotes are great. I don’t mean they’re great for indenting content – that would be an abuse of the browser’s default styling. I mean they’re great for semantically marking up a chunk of text that is being quoted verbatim. They’re especially useful in blog posts. <blockquote> <p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p> </blockquote> Notice that you can’t just put the quoted text directly between the <blockquote> tags. In order for your markup to be valid, block quotes may only contain block-level elements such as paragraphs. There is an optional cite attribute that you can place in the opening <blockquote> tag. This should contain a URL containing the original text you are quoting: <blockquote cite="http://en.wikipedia.org/wiki/Progressive_Enhancement"> <p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p> </blockquote> Great! Except… the default behavior in most browsers is to completely ignore the cite attribute. Even though it contains important and useful information, the URL in the cite attribute is hidden. You could simply duplicate the information with a hyperlink at the end of the quoted text: <blockquote cite="http://en.wikipedia.org/wiki/Progressive_Enhancement"> <p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p> <p class="attribution"> <a href="http://en.wikipedia.org/wiki/Progressive_Enhancement">source</a> </p> </blockquote> But somehow it feels wrong to have to write out the same URL twice every time you want to quote something. It could also get very tedious if you have a lot of quotes. Well, “tedious” is no problem to a programming language, so why not use a sprinkling of DOM Scripting? Here’s a plan for generating an attribution link for every block quote with a cite attribute: Write a function called prepareBlockquotes. Begin by making sure the browser understands the methods you will be using. Get all the blockquote elements in the document. Start looping through each one. Get the value of the cite attribute. If the value is empty, continue on to the next iteration of the loop. Create a paragraph. Create a link. Give the paragraph a class of “attribution”. Give the link an href attribute with the value from the cite attribute. Place the text “source” inside the link. Place the link inside the paragraph. Place the paragraph inside the block quote. Close the for loop. Close the function. 
Here’s how that translates to JavaScript: function prepareBlockquotes() { if (!document.getElementsByTagName || !document.createElement || !document.appendChild) return; var quotes = document.getElementsByTagName("blockquote"); for (var i=0; i<quotes.length; i++) { var source = quotes[i].getAttribute("cite"); if (!source) continue; var para = document.createElement("p"); var link = document.createElement("a"); para.className = "attribution"; link.setAttribute("href",source); link.appendChild(document.createTextNode("source")); para.appendChild(link); quotes[i].appendChild(para); } } Now all you need to do is trigger that function when the document has loaded: window.onload = prepareBlockquotes; Better yet, use Simon Willison’s handy addLoadEvent function to queue this function up with any others you might want to execute when the page loads. That’s it. All you need to do is save this function in a JavaScript file and reference that file from the head of your document using <script> tags. You can style the attribution link using CSS. It might look good aligned to the right with a smaller font size. If you’re looking for something to do to keep you busy this Christmas, I’m sure that this function could be greatly improved. Here are a few ideas to get you started: Should the text inside the generated link be the URL itself? If the block quote has a title attribute, how would you take its value and use it as the text inside the generated link? Should the attribution paragraph be placed outside the block quote? If so, how would you do that (remember, there is an insertBefore method but no insertAfter)? Can you think of other instances of useful information that’s locked away inside attributes? Access keys? Abbreviations? 2005 Jeremy Keith jeremykeith 2005-12-05T00:00:00+00:00 https://24ways.org/2005/dom-scripting-your-way-to-better-blockquotes/ code
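For reference, Simon’s addLoadEvent function chains a new handler onto any existing window.onload rather than overwriting it, so several scripts can share the load event:
function addLoadEvent(func) {
  var oldonload = window.onload;
  if (typeof window.onload != 'function') {
    // nothing assigned yet, so just set it
    window.onload = func;
  } else {
    // wrap the old handler and the new one together
    window.onload = function() {
      oldonload();
      func();
    };
  }
}
// queue up prepareBlockquotes alongside anything else
addLoadEvent(prepareBlockquotes);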
319 Avoiding CSS Hacks for Internet Explorer Back in October, IEBlog issued a call to action, asking developers to clean up their CSS hacks for IE7 testing. Needless to say, a lot of hubbub ensued… both on IEBlog and elsewhere. My contribution to all of the noise was to suggest that developers review their code and use good CSS hacks. But what makes a good hack? Tantek Çelik, the Godfather of CSS hacks, gave us the answer by explaining how CSS hacks should be designed. In short, they should (1) be valid, (2) target only old/frozen/abandoned user-agents/browsers, and (3) be ugly. Tantek also went on to explain that using a feature of CSS is not a hack. Now, I’m not a frequent user of CSS hacks, but Tantek’s post made sense to me. In particular, I felt it gave developers direction on how we should be coding to accommodate that sometimes troublesome browser, Internet Explorer. But what I’ve found, through my work with other developers, is that there is still much confusion over the use of CSS hacks and IE. Using examples from the code I’ve seen recently, allow me to demonstrate how to clean up some IE-specific CSS hacks. The two hacks that I’ve found most often in the code I’ve seen and worked with are the star html bug and the underscore hack. We know these are both IE-specific by checking Kevin Smith’s CSS Filters chart. Let’s look at each of these hacks and see how we can replace them with the same CSS feature-based solution. The star html bug This hack violates Tantek’s second rule as it targets current (and future) UAs. I’ve seen this both as a standalone rule and as an override to some other rule in a style sheet. Here are some code samples: * html div#header {margin-top:-3px;} .promo h3 {min-height:21px;} * html .promo h3 {height:21px;} The underscore hack This hack violates Tantek’s first two rules: it’s invalid (according to the W3C CSS Validator) and it targets current UAs. Here’s an example: ol {padding:0; _padding-left:5px;} Using child selectors We can use the child selector to replace both the star html bug and underscore hack. Here’s how: Write rules with selectors that would be successfully applied to all browsers. This may mean starting with no declarations in your rule! div#header {} .promo h3 {} ol {padding:0;} To these rules, add the IE-specific declarations. div#header {margin-top:-3px;} .promo h3 {height:21px;} ol {padding:0 0 0 5px;} After each rule, add a second rule. The selector of the second rule must use a child selector. In this new rule, correct any IE-specific declarations previously made. div#header {margin-top:-3px;} body > div#header {margin-top:0;} .promo h3 {height:21px;} .promo > h3 {height:auto; min-height:21px;} ol {padding:0 0 0 5px;} html > body ol {padding:0;} Voilà – no more hacks! There are a few caveats to this that I won’t go into… but assuming you’re operating in strict mode and barring any really complicated stuff you’re doing in your code, your CSS will still render perfectly across browsers. And while this may make your CSS slightly heftier in size, it should future-proof it for IE7 (or so I hope). Happy holidays! 2005 Kimberly Blessing kimberlyblessing 2005-12-17T00:00:00+00:00 https://24ways.org/2005/avoiding-css-hacks-for-internet-explorer/ code
318 Auto-Selecting Navigation In the article Centered Tabs with CSS Ethan laid out a tabbed navigation system which can be centred on the page. A frequent requirement for any tab-based navigation is to be able to visually represent the currently selected tab in some way. If you’re using a server-side language such as PHP, it’s quite easy to write something like class="selected" into your markup, but it can be even simpler than that. Let’s take the navigation div from Ethan’s article as an example. <div id="navigation"> <ul> <li><a href="#"><span>Home</span></a></li> <li><a href="#"><span>About</span></a></li> <li><a href="#"><span>Our Work</span></a></li> <li><a href="#"><span>Products</span></a></li> <li class="last"><a href="#"><span>Contact Us</span></a></li> </ul> </div> As you can see we have a standard unordered list which is then styled with CSS to look like tabs. By giving each tab a class which describes its logical section of the site, if we were to then apply a class to the body tag of each page showing the same, we could write a clever CSS selector to highlight the correct tab on any given page. Sound complicated? Well, it’s not a trivial concept, but actually applying it is dead simple. Modifying the markup First thing is to place a class name on each li in the list: <div id="navigation"> <ul> <li class="home"><a href="#"><span>Home</span></a></li> <li class="about"><a href="#"><span>About</span></a></li> <li class="work"><a href="#"><span>Our Work</span></a></li> <li class="products"><a href="#"><span>Products</span></a></li> <li class="last contact"><a href="#"><span>Contact Us</span></a></li> </ul> </div> Then, on each page of your site, apply a matching class to the body tag to indicate which section of the site that page is in. For example, on your About page: <body class="about">...</body> Writing the CSS selector You can now write a single CSS rule to match the selected tab on any given page. The logic is that you want to match the ‘about’ tab on the ‘about’ page and the ‘products’ tab on the ‘products’ page, so the selector looks like this: body.home #navigation li.home, body.about #navigation li.about, body.work #navigation li.work, body.products #navigation li.products, body.contact #navigation li.contact{ ... whatever styles you need to show the tab selected ... } So all you need to do when you create a new page in your site is to apply a class to the body tag to say which section it’s in. The CSS will do the rest for you – without any server-side help. 2005 Drew McLellan drewmclellan 2005-12-10T00:00:00+00:00 https://24ways.org/2005/auto-selecting-navigation/ code
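As a concrete filling for the ‘whatever styles’ placeholder above, the selected state can be as simple as a background change. One possible sketch (the colour is mine, not from the article):
/* one possible 'selected' treatment for the matched tab */
body.home #navigation li.home,
body.about #navigation li.about,
body.work #navigation li.work,
body.products #navigation li.products,
body.contact #navigation li.contact {
  background-color: #ffffd3;
}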
317 Putting the World into "World Wide Web" Despite the fact that the Web has been international in scope from its inception, the predominant mass of Web sites are written in English or another left-to-right language. Sites are typically designed visually for Western culture, and rely on an enormous body of practices for usability, information architecture and interaction design that are by and large centric to the Western world. There are certainly many reasons this is true, but as more and more Web sites realize the benefits of bringing their products and services to diverse, global markets, the more demand there will be on Web designers and developers to understand how to put the World into World Wide Web. Internationalization According to the W3C, Internationalization is: “…the design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language.” Many Web designers and developers have at least heard, if not read, about Internationalization. We understand that the Web is in fact worldwide, but many of us never have the opportunity to work with Internationalization. Or, when we do, we think of it in purely technical terms, such as “which character set do I use?” At first glance, it might seem to many that Internationalization is the act of making Web sites available to international audiences. And while that is in fact true, this isn’t done by broad-stroking techniques and technologies. Instead, it involves a far more narrow understanding of geographical, cultural and linguistic differences in specific areas of the world. This is referred to as localization and is the act of making a Web site make sense in the context of the region, culture and language(s) the people using the site are most familiar with. Internationalization itself includes the following technical tasks: Ensuring no barrier exists to the localization of sites. Of critical importance in the planning stages of a site for Internationalized audiences, the role of the developer is to ensure that no barrier exists. This means being able to perform such tasks as enabling Unicode and making sure legacy character encodings are properly handled. Preparing markup and CSS with Internationalization in mind. The earlier in the site development process this occurs, the better. Issues here include ensuring that you can support bidirectional text, identifying language, and using CSS to support non-Latin typographic features. Enabling code to support local, regional, language or culturally related references. Examples in this category would include time/date formats, localization of calendars, numbering systems, sorting of lists and managing international forms of addresses. Empowering the user. Sites must be architected so the user can easily choose or implement the localized alternative most appropriate to them. Localization According to the W3C, Localization is the: …adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a “locale”). So here’s where we get down to thinking about the more sociological and anthropological concerns. Some of the primary localization issues are: Numeric formats. Different languages and cultures use numbering systems unlike ours. So, any time we need to use numbers, such as in an ordered list, we have to have a means of representing the accurate numbering system for the locale in question. Money, honey! That’s right. 
I’ve got a pocketful of ugly U.S. dollars (why is U.S. money so unimaginative?). But I also have a drawer full of Japanese Yen, Australian Dollars, and Great British Pounds. Currency, how it’s calculated and how it’s represented is always a consideration when dealing with localization. Using symbols, icons and colors properly. Using certain symbols or icons on sites where they might offend or confuse is certainly not in the best interest of a site that wants to sell or promote a product, service or information type. Moreover, the colors we use are surprisingly persuasive – or detrimental. Think about colors that represent death, for example. In many parts of Asia, white is the color of death. In most of the Western world, black represents death. For Catholic Europe, shades of purple (especially lavender) have represented Christ on the cross and mourning since at least Victorian times. When Walt Disney World Europe launched an ad campaign using a lot of purple and very glitzy imagery, millions of dollars were lost as a result of this seeming subtle issue. Instead of experiencing joy and celebration at the ads, the European audience, particularly the French, found the marketing to be overly American, aggressive, depressing and basically unappealing. Along with this and other cultural blunders, Disney Europe has become a well-known case study for businesses wishing to become international. By failing to understand localization differences, and how powerful color and imagery act on the human psyche, designers and developers are put to more of a disadvantage when attempting to communicate with a given culture. Choosing appropriate references to objects and ideas. What seems perfectly natural in one culture in terms of visual objects and ideas can get confused in another environment. One of my favorite cases of this has to do with Gerber baby food. In the U.S., the baby food is marketed using a cute baby on the package. Most people in the U.S. culturally do not make an immediate association that what is being represented on the label is what is inside the container. However, when Gerber expanded to Africa, where many people don’t read, and where visual associations are less abstract, people made the inference that a baby on the cover of a jar of food represented what is in fact in the jar. You can imagine how confused and even angry people became. Using such approaches as a marketing ploy in the wrong locale can and will render the marketing a failure. As you can see, the act of localization is one that can have profound impact on the success of a business or organization as it seeks to become available to more and more people across the globe. Rethinking Design in the Context of Culture While well-educated designers and those individuals working specifically for companies that do a lot of localization understand these nuances, most of us don’t get exposed to these ideas. Yet, we begin to see how necessary it becomes to have an awareness of not just the technical aspects of Internationalization, but the socio-cultural ones within localization. What’s more, the bulk of information we have when it comes to designing sites typically comes from studies and work done on sites built in English and promoted to Western culture at large. We’re making a critical mistake by not including diverse languages and cultural issues within our usability and information architecture studies. Consider the following design from the BBC: In this case, we’re dealing with English, which is read left to right. 
We are also dealing with U.K. cultural norms. Notice the following: Location of navigation Use of the color red Use of diverse symbols Mix of symbols, icons and photos Location of Search Now look at this design, which is the Arabic version of the BBC News, read right to left, and dealing with cultural norms within the Arabic-speaking world. Notice the following: Location of navigation (location switches to the right) Use of the color blue (blue is considered the “safest” global color) No use of symbols and icons whatsoever Limitation of imagery to photos In most cases, the photos show people, not objects Location of Search Admittedly, some choices here are more obvious than others in terms of why they were made. But one thing that stands out is that the placement of search is the same for both versions. Is this the result of a specific localization decision, or based on what we believe about usability at large? This is exactly the kind of question that designers working on localization have to seek answers to, instead of relying on popular best practices and belief systems that exist for English-only Web sites. It’s a Wide World Web After All From this brief article on Internationalization, it becomes apparent that the art and science of creating sites for global audiences requires a lot more preparation and planning than one might think at first glance. Developers and designers not working to address these issues specifically due to time or awareness will do well to at least understand the basic process of making sites more culturally savvy, and better prepared for any future global expansion. One thing is certain: not only are we on a dramatic learning curve for designing and developing Web sites as it is, but the need to localize sites is also going to become more and more a part of the day-to-day work. Understanding aspects of what makes a site international and local will not only help you expand your skill set and make you more marketable, but it will also expand your understanding of the world and the people within it, how they relate to and use the Web, and how you can help make their experience the best one possible. 2005 Molly Holzschlag mollyholzschlag 2005-12-09T00:00:00+00:00 https://24ways.org/2005/putting-the-world-into-world-wide-web/ ux
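To ground the technical tasks listed earlier in actual markup, here’s a generic sketch – an Arabic-language page declared as UTF-8 with right-to-left text direction – and not markup taken from the BBC:
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="ar" lang="ar" dir="rtl">
<head>
<!-- serving Unicode sidesteps most legacy encoding headaches -->
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Example</title>
</head>
<body>
<!-- individual passages can switch language and direction as needed -->
<blockquote xml:lang="en" lang="en" dir="ltr"><p>An English quotation.</p></blockquote>
</body>
</html>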
316 Have Your DOM and Script It Too When working with the XMLHttpRequest object it appears you can only go one of three ways: You can stay true to the colorful moniker du jour and stick strictly to the responseXML property You can play with proprietary – yet widely supported – fire and inject the value of the responseText property into the innerHTML of an element of your choosing Or you can be evil and eval() JSON or arbitrary JavaScript delivered via responseText But did you know that there’s a fourth option giving you the best of the latter two worlds? Mint uses this unmentioned approach to grab fresh HTML and run arbitrary JavaScript simultaneously. Without relying on eval(). “But wait-”, you might say, “when would I need to do this?” Besides the example below, this technique is handy for things like tab groups that need initialization onload but miss the main onload event handler by a mile thanks to asynchronous scripting. Consider the problem Originally Mint used option 2 to refresh or load new tabs into individual Pepper panes without requiring a full roundtrip to the server. This was all well and good until I introduced the new Client Mode which when enabled allows anyone to view a Mint installation without being logged in. If voyeurs are afoot when Client Mode is disabled, the next time they refresh a pane the entire login page is inserted into the current document. That’s not very helpful, so I needed a way to redirect the current document to the login page. Enter the solution Wouldn’t it be cool if browsers interpreted the contents of script tags crammed into innerHTML? Sure, but unfortunately, that just wasn’t meant to be. However, like the body element, image elements have an onload event handler. When the image has fully loaded the handler runs the code applied to it. See where I’m going with this? By tacking a tiny image (think single pixel, transparent spacer gif – shudder) onto the end of the HTML returned by our Ajax call, we can smuggle our arbitrary JavaScript into the existing document. The image is added to the DOM, and our stowaway can go to town. <p>This is the results of our Ajax call.</p> <img src="../images/loaded.gif" alt="" onload="alert('Now that I have your attention...');" /> Please be neat So we’ve just jammed some meaningless cruft into our DOM. If our script does anything with images this addition could have some unexpected side effects. (Remember The Fly?) So in order to save that poor, unsuspecting element whose innerHTML we just swapped out from sharing Jeff Goldblum’s terrible fate we should tidy up after ourselves. And by using the removeChild method we do just that. <p>This is the results of our Ajax call.</p> <img src="../images/loaded.gif" alt="" onload="alert('Now that I have your attention...');this.parentNode.removeChild(this);" /> 2005 Shaun Inman shauninman 2005-12-24T00:00:00+00:00 https://24ways.org/2005/have-your-dom-and-script-it-too/ code
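To round the trick out, here’s roughly what the fetching side might look like – a bare-bones sketch with illustrative names, not code lifted from Mint:
// Grab fresh HTML and inject it. Any <img onload="..."> smuggled onto
// the end of the response fires as soon as it joins the DOM.
function refreshPane(url, id) {
  var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                  : new ActiveXObject('Microsoft.XMLHTTP');
  req.onreadystatechange = function() {
    if (req.readyState == 4 && req.status == 200) {
      document.getElementById(id).innerHTML = req.responseText;
    }
  };
  req.open('GET', url, true);
  req.send(null);
}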
315 Edit-in-Place with Ajax Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn’t go that far towards implementing something really practical. We dipped our toes in, but haven’t learned to swim yet. So here is swimming lesson number one. Anyone who’s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page. Prototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we’ll need here) and a whole bunch of manipulations to the user interface – all through simple library calls. Here’s what we’re building, so let’s do it. Getting Started There are two major components to this process; the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you’ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here’s what Santa dropped down my chimney: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Edit-in-Place with Ajax</title> <link href="editinplace.css" rel="Stylesheet" type="text/css" /> <script src="prototype.js" type="text/javascript"></script> <script src="editinplace.js" type="text/javascript"></script> </head> <body> <h1>Edit-in-place</h1> <p id="desc">Dashing through the snow on a one horse open sleigh.</p> </body> </html> So that’s our page. The editable item is going to be the <p> called desc. The process goes something like this: Highlight the area onMouseOver Clear the highlight onMouseOut If the user clicks, hide the area and replace with a <textarea> and buttons Remove all of the above if the user cancels the operation When the Save button is clicked, make an Ajax POST and show that something’s happening When the Ajax call comes back, update the page with the new content Events and Highlighting The first step is to offer feedback to the user that the item is editable. This is done by shading the background colour when the user mouses over. Of course, the CSS :hover pseudo class is a straightforward way to do this, but for three reasons, I’m using JavaScript to switch class names. 
:hover isn’t supported on many elements in Internet Explorer for Windows I want to keep control over when the highlight switches off after an update, regardless of mouse position If JavaScript isn’t available we don’t want to end up with the CSS suggesting it might be With this in mind, here’s how editinplace.js starts: Event.observe(window, 'load', init, false); function init(){ makeEditable('desc'); } function makeEditable(id){ Event.observe(id, 'click', function(){edit($(id))}, false); Event.observe(id, 'mouseover', function(){showAsEditable($(id))}, false); Event.observe(id, 'mouseout', function(){showAsEditable($(id), true)}, false); } function showAsEditable(obj, clear){ if (!clear){ Element.addClassName(obj, 'editable'); }else{ Element.removeClassName(obj, 'editable'); } } The first line attaches an onLoad event to the window, so that the function init() gets called once the page has loaded. In turn, init() sets up all the items on the page that we want to make editable. In this example I’ve just got one, but you can add as many as you like. The function makeEditable() attaches the mouseover, mouseout and click events to the item we’re making editable. All showAsEditable does is add and remove the class name editable from the object. This uses the particularly cunning methods Element.addClassName() and Element.removeClassName() which enable you to cleanly add and remove effects without affecting any styling the object may otherwise have. Oh, remember to add a rule for .editable to your style sheet: .editable{ color: #000; background-color: #ffffd3; } The Switch As you can see above, when the user clicks on an editable item, a call is made to the function edit(). This is where we switch out the static item for a nice editable textarea. Here’s how that function looks. function edit(obj){ Element.hide(obj); var textarea ='<div id="' + obj.id + '_editor"> <textarea id="' + obj.id + '_edit" name="' + obj.id + '" rows="4" cols="60">' + obj.innerHTML + '</textarea>'; var button = '<input id="' + obj.id + '_save" type="button" value="SAVE" /> OR <input id="' + obj.id + '_cancel" type="button" value="CANCEL" /></div>'; new Insertion.After(obj, textarea+button); Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false); Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false); } The first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object. The last thing to do before we leave the user to edit is to attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel. In the event of a cancel, we can clean up behind ourselves like so: function cleanUp(obj, keepEditable){ Element.remove(obj.id+'_editor'); Element.show(obj); if (!keepEditable) showAsEditable(obj, true); } Saving the Changes This is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response. 
function saveChanges(obj){ var new_content = escape($F(obj.id+'_edit')); obj.innerHTML = "Saving..."; cleanUp(obj, true); var success = function(t){editComplete(t, obj);} var failure = function(t){editFailed(t, obj);} var url = 'edit.php'; var pars = 'id=' + obj.id + '&content=' + new_content; var myAjax = new Ajax.Request(url, {method:'post', postBody:pars, onSuccess:success, onFailure:failure}); } function editComplete(t, obj){ obj.innerHTML = t.responseText; showAsEditable(obj, true); } function editFailed(t, obj){ obj.innerHTML = 'Sorry, the update failed.'; cleanUp(obj); } As you can see, we first grab in the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to “Saving…” to show that an update is occurring, and make the Ajax POST. If the Ajax fails, editFailed() sets the contents of the object to “Sorry, the update failed.” Admittedly, that’s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to “Saving…”. No one likes a failure – especially a messy one. If the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up. Meanwhile, back at the server The missing piece of the puzzle is the server-side script for committing the changes to your database. Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here’s what I have in PHP. <?php $id = $_POST['id']; $content = $_POST['content']; echo htmlspecialchars($content); ?> Not exactly rocket science is it? I’m just catching the content item from the POST and echoing it back. For your application to be useful, however, you’ll need to know exactly which record you should be updating. I’m passing in the ID of my <div>, which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update. You should also check the user’s credentials to make sure they have permission to edit whatever it is they’re editing. Basically the same rules apply as with any script in your application. Limitations There are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I’ve already mentioned. The second is that from an idealistic standpoint, I’d rather not be using innerHTML. However, the reality is that it’s presently the most efficient way of making large changes to the document. If you’re serving as XML, remember that you’ll need to replace these with proper DOM nodes. It’s also important to note that it’s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don’t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all – it fails silently. It is for this reason that this shouldn’t be used as a complete replacement for a traditional, universally accessible edit form. It’s a great time-saver for those with the ability to use it, but it’s no replacement. See it in action I’ve put together an example page using the inert PHP script above. 
That is to say, your edits aren’t committed to a database, so the example is reset when the page is reloaded. 2005 Drew McLellan drewmclellan 2005-12-23T00:00:00+00:00 https://24ways.org/2005/edit-in-place-with-ajax/ code
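Following up the stow-away suggestion above, here’s one way saveChanges() and editFailed() could be adjusted so a failed update restores the original text rather than leaving “Saving…” behind – a sketch, not part of the original download:
function saveChanges(obj){
  var new_content = escape($F(obj.id+'_edit'));
  obj.preUpdate = obj.innerHTML; // stow away the original contents
  obj.innerHTML = "Saving...";
  cleanUp(obj, true);
  var success = function(t){editComplete(t, obj);}
  var failure = function(t){editFailed(t, obj);}
  var url = 'edit.php';
  var pars = 'id=' + obj.id + '&content=' + new_content;
  var myAjax = new Ajax.Request(url, {method:'post', postBody:pars, onSuccess:success, onFailure:failure});
}
function editFailed(t, obj){
  obj.innerHTML = obj.preUpdate; // put things back the way we found them
  cleanUp(obj);
}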
314 Easy Ajax with Prototype There’s little more impressive on the web today than an appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of a task into a pleasurable activity. But it’s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it’s just so much damn effort. Well, the good news is that – ta-da – it doesn’t have to be a headache. But man does it still look impressive. Here’s how to amaze your friends. Introducing prototype.js Prototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it’s a JavaScript file which you link into your page that then enables you to do cool stuff. There’s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it’s good to go. What a nice man that Mr Stephenson is – friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp. First step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree. Cutting to the chase Before I go on and set up an example of how to use this, let’s just get to the crux. Here’s how Prototype enables you to make a simple Ajax call and dump the results back to the page: var url = 'myscript.php'; var pars = 'foo=bar'; var target = 'output-div'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); This snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page. Knocking up a basic example So to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call to. So, to that basic HTML page for the user to interact with. Here’s one I found whilst out carol singing: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Easy Ajax</title> <script type="text/javascript" src="prototype.js"></script> <script type="text/javascript" src="ajax.js"></script> </head> <body> <form method="get" action="greeting.php" id="greeting-form"> <div> <label for="greeting-name">Enter your name:</label> <input id="greeting-name" type="text" /> <input id="greeting-submit" type="submit" value="Greet me!" /> </div> <div id="greeting"></div> </form> </body> </html> As you can see, I’ve linked in prototype.js, and also a file called ajax.js, which is where we’ll be putting our glue. (Careful where you leave your glue, kids.) Our basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There’s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. 
You’ll also notice that the form has a submit button – this is so that it can function as a regular form when no JavaScript is available. It’s important not to get carried away and forget the basics of accessibility. Meanwhile, back at the server So we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you’d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it. Here’s a quick PHP script – greeting.php – that Santa brought me early. <?php $the_name = htmlspecialchars($_GET['greeting-name']); echo "<p>Season's Greetings, $the_name!</p>"; ?> You’ll perhaps want to do something a little more complex within your own projects. Just sayin’. Gluing it all together Inside our ajax.js file, we need to hook this all together. We’re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. Here’s how we attach an onload event to the window object and get it to call a function named init(): Event.observe(window, 'load', init, false); Now we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field. function init(){ $('greeting-submit').style.display = 'none'; Event.observe('greeting-name', 'keyup', greet, false); } As you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this: function greet(){ var url = 'greeting.php'; var pars = 'greeting-name='+escape($F('greeting-name')); var target = 'greeting'; var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars}); } The key points to note here are that any user input needs to be escaped before it’s put into the parameters so that it’s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call. That’s it No, seriously. That’s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz. 2005 Drew McLellan drewmclellan 2005-12-01T00:00:00+00:00 https://24ways.org/2005/easy-ajax-with-prototype/ code
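One possible refinement: Prototype’s Form.serialize() can build the parameter string for you, gathering up and URL-encoding every field in the form. Note it works from name attributes, so the text input would need name="greeting-name" adding first. A sketch of greet() rewritten that way:
// greet() letting Prototype gather and escape the form fields.
// Requires name="greeting-name" on the input, since serialization
// reads field names rather than IDs.
function greet(){
  var url = 'greeting.php';
  var pars = Form.serialize('greeting-form');
  new Ajax.Updater('greeting', url, {method: 'get', parameters: pars});
}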
313 Centered Tabs with CSS Doug Bowman’s Sliding Doors is pretty much the de facto way to build tabbed navigation with CSS, and rightfully so – it is, as they say, rockin’ like Dokken. But since it relies heavily on floats for the positioning of its tabs, we’re constrained to either left- or right-hand navigation. But what if we need a bit more flexibility? What if we need to place our navigation in the center? Styling the li as a floated block does give us a great deal of control over margin, padding, and other presentational styles. However, we should learn to love the inline box – with it, we can create a flexible, centered alternative to floated navigation lists. Humble Beginnings Do an extra shot of ‘nog, because you know what’s coming next. That’s right, a simple unordered list: <div id="navigation"> <ul> <li><a href="#"><span>Home</span></a></li> <li><a href="#"><span>About</span></a></li> <li><a href="#"><span>Our Work</span></a></li> <li><a href="#"><span>Products</span></a></li> <li class="last"><a href="#"><span>Contact Us</span></a></li> </ul> </div> If we were wedded to using floats to style our list, we could easily fix the width of our ul, and trick it out with some margin: 0 auto; love to center it accordingly. But this wouldn’t net us much flexibility: if we ever changed the number of navigation items, or if the user increased her browser’s font size, our design could easily break. Instead of worrying about floats, let’s take the most basic approach possible: let’s turn our list items into inline elements, and simply use text-align to center them within the ul: #navigation ul, #navigation ul li { list-style: none; margin: 0; padding: 0; } #navigation ul { text-align: center; } #navigation ul li { display: inline; margin-right: .75em; } #navigation ul li.last { margin-right: 0; } Our first step is sexy, no? Well, okay, not really – but it gives us a good starting point. We’ve tamed our list by removing its default styles, set the list items to display: inline, and centered the lot. Adding a background color to the links shows us exactly how the different elements are positioned. Now the fun stuff. Inline Elements, Padding, and You So how do we give our links some dimensions? Well, as the CSS specification tells us, the height property isn’t an option for inline elements such as our anchors. However, what if we add some padding to them? #navigation li a { padding: 5px 1em; } I just love leading questions. Things are looking good, but something’s amiss: as you can see, the padded anchors seem to be escaping their containing list. Thankfully, it’s easy to get things back in line. Our anchors have 5 pixels of padding on their top and bottom edges, right? Well, by applying the same vertical padding to the list, our list will finally “contain” its child elements once again. ’Tis the Season for Tabbing Now, we’re finally able to follow the “Sliding Doors” model, and tack on some graphics: #navigation ul li a { background: url("tab-right.gif") no-repeat 100% 0; color: #06C; padding: 5px 0; text-decoration: none; } #navigation ul li a span { background: url("tab-left.gif") no-repeat; padding: 5px 1em; } #navigation ul li a:hover span { color: #69C; text-decoration: underline; } Finally, our navigation’s looking appropriately sexy. By placing an equal amount of padding on the top and bottom of the ul, our tabs are properly “contained”, and we can subsequently style the links within them. But what if we want them to bleed over the bottom-most border? 
Easy: we can simply decrease the bottom padding on the list by one pixel, like so. A Special Note for Special Browsers The Mac IE5 users in the audience are likely hopping up and down by now: as they’ve discovered, our centered navigation behaves rather annoyingly in their browser. As Philippe Wittenbergh has reported, Mac IE5 is known to create “phantom links” in a block-level element when text-align is set to any value but the default value of left. Thankfully, Philippe has documented a workaround that gets that [censored] venerable browser to behave. Simply place the following code into your CSS, and the links will be restored to their appropriate width: /**//*/ #navigation ul li a { display: inline-block; white-space: nowrap; width: 1px; } /**/ IE for Windows, however, displays an extra kind of crazy. The padding I’ve placed on my anchors is offsetting the spans that contain the left curve of my tabs; thankfully, these shenanigans are easily straightened out: /**/ * html #navigation ul li a { padding: 0; } /**/ And with that, we’re finally finished. All set. And that’s it. With your centered navigation in hand, you can finally enjoy those holiday toddies and uncomfortable conversations with your skeevy Uncle Eustace. 2005 Ethan Marcotte ethanmarcotte 2005-12-08T00:00:00+00:00 https://24ways.org/2005/centered-tabs-with-css/ code
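Incidentally, the one-pixel tweak mentioned above (“like so”) lives in the linked example rather than the text, so here’s a guess at what it looks like – assuming the list carries the same 5px vertical padding as the anchors:
/* let the tabs bleed over the bottom-most border: one pixel less
   padding below the list than inside the tabs themselves */
#navigation ul {
  padding: 5px 0 4px;
}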
312 Preparing to Be Badass Next Year Once we’ve eaten our way through the holiday season, people will start to think about new year’s resolutions. We tend to focus on things that we want to change… and often things that we don’t like about ourselves to “fix”. We set rules for ourselves, or try to start new habits or stop bad ones. We focus in on things we will or won’t do. For many of us the list of things we “ought” to be spending time on is just plain overwhelming – family, charity/community, career, money, health, relationships, personal development. It’s kinda scary even just listing it out, isn’t it? I want to encourage you to think differently about next year. The ever-brilliant Kathy Sierra articulates a better approach really well when talking about the attitude we should have to building great products. She tells us to think not about what the user will do with our product, but about what they are trying to achieve in the real world and how our product helps them to be badass1. When we help the user be badass, then we are really making a difference. I suppose this is one way of saying: focus not on what you will do, focus on what it will help you achieve. How will it help you be awesome? In what ways do you want to be more badass next year? A professional lens Though of course you might want to focus in on health or family or charity or community or another area next year, many people will want to become more badass in their chosen career. So let’s talk about a scaffold to help you figure out your professional / career development next year. First up, an assumption: everyone wants to be awesome. Nobody gets up in the morning aiming to be crap at their job. Nobody thinks to themselves “Today I am aiming for just south of mediocre, and if I can mess up everybody else’s ability to do good work then that will be just perfect2”. Ergo, you want to be awesome. So what does awesome look like? Danger! The big trap that people fall into when they think about their professional development is to immediately focus on the things that they aren’t good at. When you ask people “what do you want to work on getting better at next year?” they frequently gravitate to the things that they believe they are bad at. Why is this a trap? Because if you focus all your time and energy on improving the areas that you suck at, you are going to end up middling at everything. Going from bad → mediocre at a given skill / behaviour takes a bunch of time and energy. So if you spend all your time going from bad → mediocre at things, where do you think you end up? That’s right, mediocre. Mediocrity is not a great career goal, kids. What do you already rock at? The much better investment of time and energy is to go from good → awesome. It often takes the same amount of relative time and energy, but wow the end result is better! So first, ask yourself and those who know you well what you are already pretty damn good at. Combat imposter syndrome by asking others. Then figure out how to double down on those things. What does brilliant look like for a given skill? What’s the knowledge or practice that you need to level yourself up even further in that thing? But what if I really really suck? Admittedly, sometimes something you suck at really is holding you back. But it’s important to separate out weaknesses (just something you suck at) from controlling weaknesses (something you suck at that actually matters for your chosen career). 
If skill x is just not an important thing for you to be good at, you may never need to care that you aren’t good at it. If your current role or the one you aspire to next really really requires you to be great at x, then it’s worth investing your time and energy (and possibly money too) getting better at it. So when you look at the things that you aren’t good at, which of those are actually essential for success? The right ratio A good rule of thumb is to pick three things you are already good at to work on becoming awesome at and limit yourself to one weakness that you are trying to improve on. That way you are making sure that you get to awesome in areas where you already have an advantage, and limit the amount of time you are spending on going from bad → mediocre. Levelling up learning So once you’ve figured out your areas you want to focus on next year, what do you actually decide to do? Most of all, you should try to design your day-to-day work in a way that it is also an effective learning experience. This means making sure you have a good feedback loop – you get to try something, see if it works, learn from it, rinse and repeat. It’s also about balance: you want to be challenged enough for work to be interesting, without it being so hard it’s frustrating. You want to do similar / the same things often enough that you get to learn and improve, without it being so repetitive that it’s boring. Continuously getting better at things you are already good at is actually both easier and harder than it sounds. The advantage is that it’s pretty easy to add the feedback loop to make sure that you are improving; the disadvantage is that you’re already good at these skills so you could easily just “do” without ever stopping to reflect and improve. Build in time for personal retrospectives (“What went well? What didn’t? What one thing will I choose to change next time?”) and find a way of getting feedback from outside sources as well. As for the new skills, it’s worth knowing that skill development follows a particular pattern: We all start out unconsciously incompetent (we don’t know what to do and if we tried we’d unwittingly get it wrong), progress on to conscious incompetence (we now know we’re doing it wrong) then conscious competence (we’re doing it right but wow it takes effort and attention) and eventually get to unconscious competence (automatically getting it right). Your past experiences and knowledge might let you move faster through these stages, but no one gets to skip them. Invest the time and remember you need the feedback loop to really improve. What about keeping up? Everything changes very fast in our industry. We need to invest in not falling behind, in keeping on top of what great looks like. There are a bunch of ways to do this, from reading blog posts, following links on Twitter and reading books, to attending conferences or workshops, or just finding time to build things in new ways or with new technologies. Which will work best for you depends on how you best learn. Do you prefer to swallow a book? Do you learn most by building or experimenting? Whatever your learning style though, remember that there are three real needs: Scan the landscape (what’s changing, does it matter) Gain the knowledge or skills (get the detail) Apply the knowledge or skills (use it in reality) When you remember that you need all three of these things it can help you get more out of what you do. For me personally, I use a combination of conferences and blogs / Twitter to scan the landscape. 
Half of what I want out of a conference is just a list of things to have on my radar that might become important. I then pick a couple of things to go read up on more (I personally learn most effectively by swallowing a book or spec or similar). And then I pick one thing at a time to actually apply in real life, to embed the skill / knowledge. In summary Aim to be awesome (mediocrity is not a career goal). Figure out what you already rock at. Only care about stuff you suck at that matters for your career. Pick three things to go from good → awesome and one thing to go from bad → mediocre (or mediocre → good) this year. Design learning into your daily work. Scan the landscape, learn new stuff, apply it for real. Be badass! She wrote a whole book about it. You should read it: Badass: Making Users Awesome ↩ Before you argue too vehemently: I suppose some antisocial sociopathic bastards do exist. Identify them, and then RUN AWAY FAST AS YOU CAN #realtalk ↩ 2016 Meri Williams meriwilliams 2016-12-22T00:00:00+00:00 https://24ways.org/2016/preparing-to-be-badass-next-year/ business
311 Designing Imaginative Style Guides (Living) style guides and (atomic) pattern libraries are “all the rage,” as my dear old Nana would’ve said. If articles and conference talks are to be believed, making and using them has become incredibly popular. I think there are plenty of ways we can improve how style guides look and make them better at communicating design information to creatives without it getting in the way of information that technical people need. Guides to libraries of patterns Most of my consulting work and a good deal of my creative projects now involve designing style guides. I’ve amassed a huge collection of brand guidelines and identity manuals as well as, more recently, guides to libraries of patterns intended to help designers and developers make digital products and websites. Two pages from one of my Purposeful style guide packs. Designs © Stuff & Nonsense. “Style guide” is an umbrella term for several types of design documentation. Sometimes we’re referring to static style or visual identity guides, other times voice and tone. We might mean front-end code guidelines or component/pattern libraries. These all offer something different but more often than not they have something in common. They look ugly enough to have been designed by someone who enjoys configuring a router. OK, that was mean; not everyone’s going to think an unimaginative style guide design is a problem. After all, as long as a style guide contains information people need, how it looks shouldn’t matter, should it? Inspiring not encyclopaedic Well here’s the thing. Not everyone needs to take the same information away from a style guide. If you’re looking for markup and styles to code a ‘media’ component, you’re probably going to be the technical type, whereas if you need to understand the balance of sizes across a typographic hierarchy, you’re more likely to be a creative. What you need from a style guide is different. Sure, some people1 need rules: “Do this (responsive pattern)” or “don’t do that (auto-playing video.)” Those people probably also want facts: “Use this (hexadecimal value) and not that (inaccessible colour combination).” Style guides need to do more than list facts and rules. They should demonstrate a design, not just document its parts. The best style guides are inspiring not encyclopaedic. I’ll explain by showing how many style guides currently present information about colour. Colours communicate I’m sure you’ll agree that alongside typography, colour’s one of the most important ingredients in a design. Colour communicates personality, creates mood and is vital to an easily understandable interactive vocabulary. So you’d think that an average style guide would describe all this in any number of imaginative ways. Well, you’d be disappointed, because the most inspiring you’ll find looks like a collection of chips from a paint chart. Lonely Planet’s Rizzo does a great job of separating its Design Elements from UI Components, and while its ‘Click to copy’ colour values are a thoughtful touch, you’ll struggle to get a feeling for Lonely Planet’s design by looking at their colour chips. Lonely Planet’s Rizzo style guide. Lonely Planet’s approach is a common way to display colour information and it’s one that you’ll also find at Greenpeace, Sky, The Times and on countless more style guides. Greenpeace, Sky and The Times style guides. 
GOV.UK—not a website known for its creative flair—varies this approach by using circles, which I find strange as circles don’t feature anywhere else in its branding or design. On the plus side though, their designers have provided some context by categorising colours by usage such as text, links, backgrounds and more. GOV.UK style guide. Google’s Material Design offers an embarrassment of colours but most helpfully it also advises how to combine its primary and accent colours into usable palettes. Google’s Material Design. While the ability to copy colour values from a reference might be all a technical person needs, designers need to understand why particular colours were chosen as well as how to use them. Inspiration not documentation Few style guides offer any explanation and even less by way of inspiring examples. Most are extremely vague when they describe colour: “Use colour as a presentation element for either decorative purposes or to convey information.” The Government of Canada’s Web Experience Toolkit states, rather obviously. “Certain colors have inherent meaning for a large majority of users, although we recognize that cultural differences are plentiful.” Salesforce tell us, without actually mentioning any of those plentiful differences. I’m also unsure what makes the Draft U.S. Web Design Standards colours a “distinctly American palette” but it will have to work extremely hard to achieve its goal of communicating “warmth and trustworthiness” now. In Canada, “bold and vibrant” colours reflect Alberta’s “diverse landscape.” Adding more colours to their palette has made Adobe “rich, dynamic, and multi-dimensional” and at Skype, colours are “bold, colourful (obviously) and confident” although their style guide doesn’t actually provide information on how to use them. The University of Oxford, on the other hand, is much more helpful by explaining how and why to use their colours: “The (dark) Oxford blue is used primarily in general page furniture such as the backgrounds on the header and footer. This makes for a strong brand presence throughout the site. Because it features so strongly in these areas, it is not recommended to use it in large areas elsewhere. However it is used more sparingly in smaller elements such as in event date icons and search/filtering bars.” OpenTable style guide. The designers at OpenTable have cleverly considered how to explain the hierarchy of their brand colours by presenting them and their supporting colours in various size chips. It’s also obvious from OpenTable’s design which colours are primary, supporting, accent or neutral without them having to say so. Art directing style guides For the style guides I design for my clients, I go beyond simply documenting their colour palette and type styles and describe visually what these mean for them and their brand. I work to find distinctive ways to present colour to better represent the brand and also to inspire designers. For example, on a recent project for SunLife, I described their palette of colours and how to use them across a series of art directed pages that reflect the lively personality of the SunLife brand. Information about HEX and RGB values, Sass variables and when to use their colours for branding, interaction and messaging is all there, but in a format that can appeal to both creative and technical people. SunLife style guide. Designs © Stuff & Nonsense. Purposeful style guides If you want to improve how you present colour information in your style guides, there’s plenty you can do. 
For a start, you needn’t confine colour information to the palette page in your style guide. Find imaginative ways to display colour across several pages to show it in context with other parts of your design. Here are two CSS gradient-filled ‘cover’ pages from my Purposeful style guides. Colour impacts other elements too, including typography, so make sure you include colour information on those pages, and vice-versa. Purposeful. Designs © Stuff & Nonsense. A visual hierarchy can be easier to understand than labelling colours as ‘primary,’ ‘supporting,’ or ‘accent,’ so find creative ways to present that hierarchy. You might use panels of different sizes or arrange boxes on a modular grid to fill a page with colour. Don’t limit yourself to rectangular colour chips; use circles or other shapes created using only CSS. If irregular shapes are a part of your brand, fill SVG silhouettes with CSS and then wrap text around them using CSS shapes. Purposeful. Designs © Stuff & Nonsense. Summing up In many ways I’m as frustrated with style guide design as I am with the general state of design on the web. Style guides and pattern libraries needn’t be dull in order to be functional. In fact, they’re the perfect place for you to try out new ideas and technologies. There’s nowhere better to experiment with new properties like CSS Grid than on your style guide. The best style guide designs showcase new approaches and possibilities, and don’t simply document the old ones. Be as creative with your style guide designs as you are with any public-facing part of your website. Purposeful are HTML and CSS style guide templates designed to help you develop creative style guides and pattern libraries for your business or clients. Save time while impressing your clients by using easily customisable HTML and CSS files that have been designed and coded to the highest standards. Twenty pages covering all common style guide components including colour, typography, buttons, form elements, and tables, plus popular pattern library components. Purposeful style guides will be available to buy online in January. Boring people ↩ 2016 Andy Clarke andyclarke 2016-12-13T00:00:00+00:00 https://24ways.org/2016/designing-imaginative-style-guides/ design
310 Fairytale of new Promise There are only four good Christmas songs. I know, yeah, JavaScript or whatever. We’ll get to that in a minute, I promise. First—and I cannot stress this enough—there are four good Christmas songs. You’re free to disagree with me here, of course, but please try to understand that you will be wrong. They don’t all have the most safe-for-work titles; I can’t list all of them here, but if you choose to let your fingers do the walkin’ to your nearest search engine, I will say that one was released by the band FEAR way back in 1982 and one was on Run the Jewels’ self-titled debut album. The lyrics are a hell of a lot worse than the titles, so maybe wait until you get home from work before you queue them up. Wear headphones, if you’ve got thin walls. For my money, though, the two I can reference by name are the top of that small heap: Tom Waits’ Christmas Card from a Hooker in Minneapolis, and The Pogues’ Fairytale of New York. The former once held the honor of being the only good Christmas song—about which I was also unequivocally correct, right up until I changed my mind. It’s not the song up for discussion today, but feel free to familiarize yourself just the same—I’ll wait. Fairytale of New York—the top of the list—starts out by hinting at some pretty standard holiday fare; dreams and cheer and whatnot. Typical seasonal stuff, so long as you ignore that the story seems to be recounted as a drunken flashback in a jail cell. You can probably make a few guesses at the underlying spirit of the song based on that framing: following a lucky break, our bright-eyed protagonists move to New York in search of fame and fortune, only to quickly descend into bad decisions, name-calling, and vaguely festive chaos. This song speaks to me on a couple of levels, not the least of which is as a retelling of my day-to-day interactions with JavaScript. Each day’s melody might vary a little bit, granted, but the lyrics almost always follow a pretty clear arc toward “PARENTAL ADVISORY: EXPLICIT CONTENT.” You might have heard a similar tune yourself; it goes a little somethin’ like setTimeout(function() { console.log( "this should be happening last" ); }, 1000); Callbacks are calling callbacks calling callbacks and something is happening somewhere, as the JavaScript interpreter plods through our code start-to-finish, line-by-line, step-by-step. If we need to take actions based on the results of something that could take its sweet time resolving, well, we’d better fiddle with the order of things to make sure those actions don’t happen too soon. “But I can see a better time,” as the song says, “when all our dreams come true.” So, with that Pogues brand of holiday spirit squarely in mind—by which I mean that your humble narrator is almost certainly drunk, and may be incarcerated at the time of publication—gather ’round for a story of hope, of hardships, of semi-asynchronous JavaScript programming, and ultimately: of Promise unfulfilled. The Main Thread JavaScript is single-minded, in a manner of speaking. Anything we tell the JavaScript runtime to do goes into a single-file queue; you’ll see it referred to as the “main thread,” or “UI thread.” That thread can be shared by a number of critical browser processes, like rendering and re-rendering parts of the page, and user interactions ranging from the simple—say, highlighting text—to the more complex—interacting with form elements. If that sounds a little scary to you, well, that’s because it is. 
The more complex our scripts, the more we’re cramming into that single-file main thread, to be processed along with—say—some of our CSS animations. Too much JavaScript clogging up the main thread means a lot of user-facing performance jankiness. Getting away from that single thread is a big part of all the excitement around Web Workers, which allow us to offload entire scripts into their own dedicated background threads—though not without limitations of their own. Outside of Web Workers, that everything-thread is the only game in town: scripts executed one thing at a time, functions calling functions calling functions, taking numbers and crowding up the same deli counter as a user’s interactions—which, in this already strained metaphor, would be ham, I guess? Asynchronous JavaScript Now, those queued actions may include asynchronous things. For example: AJAX callbacks, setTimeout/setInterval, and addEventListener won’t block the main thread while we’re waiting for a request to come back, a timer to tick away, or an event to trigger. Once those things do kick in, though, the actions they’re meant to perform will get shuffled right back into that single-thread queue. There are a couple of places you might have written asynchronously-fired JavaScript, even if you’re not super familiar with the overarching concept: XMLHttpRequest—“AJAX,” if ya nasty—or just kicking off a function once a user triggers a click or mouseenter event. Event-driven development is writ a little larger, with the overall flow of the script dictated by events, both internal and external. Writing event-driven JavaScript applications is a step in the right direction for sure—it won’t cure what ails the main thread, but it does work with the medium in a reasonable way. Event-driven development allows us to manage our use of the main thread in a way that makes sense. If any of this rings a bell for you, the motivation for Promises should feel familiar. For example, a custom init event might kick things off, and fire a create event that applies our classes and restructures our markup which, on completion, fires a bindEvents event to handle all the event listeners for user interaction. There might not sound like much difference between that and one big function that kicks off, manipulates the DOM, and binds our events line-by-line—but in a script of sufficient size and complexity we’re not only provided with a decoupled flow through the script, but obvious touchpoints for future updates and a predictable structure for ongoing maintenance. This pattern falls apart a little where we were still creating, binding, and listening for events in the same top-to-bottom, one-item-at-a-time way—we had to set a listener on a given object before the event fires, or nothing would happen: // Create the event: var event = document.createEvent( "Event" ); // Name the event: event.initEvent( "doTheStuff", true, true ); // Listen for the custom `doTheStuff` event on `window`: window.addEventListener( "doTheStuff", initializeEverything ); // Fire the custom event window.dispatchEvent( event ); This example is a little contrived, and this stuff is a lot more manageable for sure with the addition of a framework, but that’s the basic gist: create and name the event, add a listener for the event, and—after setting our listener—dispatch the event. Events and callbacks aren’t the only game in town for weaving our way in and out of the main thread, though—at least, not anymore. 
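Before we meet them, here is a quick sketch of where the plain callback approach starts to buckle. Every function name below is hypothetical, invented for illustration and stubbed with setTimeout so the example actually runs; the shape, though, should look familiar: each dependent asynchronous step can only begin inside the previous step’s callback.

// Hypothetical asynchronous steps, stubbed with setTimeout so this runs anywhere:
function getUser( id, callback ) {
  setTimeout( function() { callback( { id: id, name: "Shane" } ); }, 100 );
}
function getPosts( user, callback ) {
  setTimeout( function() { callback( [ "First post by " + user.name ] ); }, 100 );
}
function getComments( post, callback ) {
  setTimeout( function() { callback( [ "Lovely stuff" ] ); }, 100 );
}

// Each dependent step can only start from inside the last one's callback:
getUser( 1, function( user ) {
  getPosts( user, function( posts ) {
    getComments( posts[ 0 ], function( comments ) {
      // Three levels deep already, and every level would need
      // its own error handling bolted on.
      console.log( user.name, posts.length, comments.length );
    });
  });
});

Every step is welded to the inside of the step before it; unwinding that structure is exactly what the next pattern is for.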
Promises A Promise is, at the risk of sounding sentimental, pure potential—an empty container into which a value eventually results. A Promise can exist in several states: “pending,” while the computation it contains is being performed, or “resolved” once that computation is complete. Once resolved, a Promise is “fulfilled” if it gave us back something we expect, or “rejected” if it didn’t. The Promise constructor accepts a callback with two arguments: resolve and reject. We perform an action—asynchronous or otherwise—within that callback. If everything in there has gone according to plan, we call resolve. If something has gone awry, we call reject—with an error, conventionally. To illustrate, let’s tack something together with a pretty decent chance of doing what we don’t want: a promise meant only to give us the number 1, but which has a chance of giving us back a 2. No reasonable person would ever do this, of course, but I wouldn’t necessarily put it past me. var promisedOne = new Promise( function( resolve, reject ) { var coinToss = Math.floor( Math.random() * 2 ) + 1; if( coinToss === 1 ) { resolve( coinToss ); } else { reject( new Error( "That ain’t a one." ) ); } }); There’s nothing too surprising in there, after you boil it all down. It’s a little return-y, with the exception that we’re flagging results as “as expected” or “something went wrong.” Tapping into that Promise uses another new method: then—and as someone who attempts to make sense of JavaScript by breaking it down to plain ol’ human-language, I’m a big fan of this syntax. then is tacked onto our Promise identifier, and does just what it says on the tin: once the Promise is resolved, then do one of two things, both supplied as callbacks: the first in the case of a fulfilled promise, and the second in the case of a rejected one. Those two callbacks will have, as arguments, the results we specified with resolve or reject, respectively. It sounds like a lot in prose, but in code it’s a pretty simple pattern: promisedOne.then( function( result ) { console.log( result ); }, function( error ) { console.error( error ); }); If you’ve spent any time working with AJAX—jQuery-wise, in particular—you’ve seen something like this pattern before: a success callback and an error callback. The state of a promise, once fulfilled or rejected, cannot be changed—any reference we make to promisedOne will have a single, fixed result. It may not look like too much the way I’m using it here, but it’s powerful stuff—a pattern for asynchronously resolving anything. I’ve recently used Promises alongside a script that emulates Font Load Events, to apply webfonts asynchronously and avoid a potential performance hit. Font Face Observer allows us to, as the name implies, determine when the files referenced by our @font-face rules have finished loading. var fontObserver = new FontFaceObserver( "Fancy Font" ); fontObserver.check().then(function() { document.documentElement.className += " fonts-loaded"; }, function( error ) { console.error( error ); }); fontObserver.check() gives us back a Promise, allowing us to chain on a then containing our callbacks for success and failure. We use the fulfilled callback to bolt a class onto the page once the font file has been fully transferred. We don’t bother including an argument in the first function, since we don’t care about the result itself so much as we care that the promise resolved without error—we’re not doing anything with the resolved value, just adding a class to the page. 
We do include the error argument, since we’ll want to know what happened should something go wrong. Now, this isn’t the tidiest syntax around—at least to my eyes—with those two functions just kinda floating in a then. Luckily there’s a similar alternative syntax; one that I find a bit easier to parse at-a-glance: fontObserver.check() .then(function() { document.documentElement.className += " fonts-loaded"; }) .catch(function( error ) { console.log( error ); }); The first callback inside then provides us with our success state, while the catch provides us with a single, explicit “something went wrong” callback. The two syntaxes aren’t completely identical in all situations, but for a simple case like this, I find it a little neater. The Common Thread I guess I still owe you an explanation, huh. Not about the JavaScript-whatever; I think I’ve explained that plenty. No, I mean Fairytale of New York, and why it’s perched up there at the top of the four (4) song heap. Fairytale is a sad song, ostensibly. If you follow the main thread—start to finish, line-by-line, step by step—Fairytale is a sad song. And I can see you out there, visions of Die Hard dancing in your heads: “but is it a Christmas song?” Well, for my money, nothing says “holidays” quite like unreliable narration. Shane MacGowan, the song’s author, has placed the first verse about “Christmas Eve in the drunk tank” as happening right after the “lucky one, came in eighteen-to-one”—not at the chronological end of the story. That means the song might not be mostly drunken flashback, but all of it a single, overarching flashback including a Christmas Eve in protective custody. It could be that the man and woman are, together, recounting times long past—good times and bad times—maybe not even in chronological order. Hell, the “NYPD Choir” mentioned in the chorus? There’s no such thing. We’re not big Christmas folks, my family and I. But just the same, every year, the handful of us get together, and every year—like clockwork—there’s a lull in conversation, there’s a sharp exhale, and Ma says “we all made it.” Not to a house, not to a dinner, but through another year, to another Christmas. At this point, without fail, someone starts telling a story—and one begets another, and so on. Sometimes the stories are happy, sometimes they’re sad, more often than not they’re both. Some are about things we were lucky to walk away from, some are about a time when another one of us didn’t. Start-to-finish, line-by-line, step-by-step, the main thread through the year doesn’t change, and maybe there isn’t a whole lot we can do to change it. But by carefully weaving our way in and out of that thread—stories all out of sync and resolving one way or the other, with the results determined by questionably reliable narrators—we can change the way we interact with it and, little by little, we can start making sense of it. 2016 Mat Marquis matmarquis 2016-12-19T00:00:00+00:00 https://24ways.org/2016/fairytale-of-new-promise/ code
309 HTTP/2 Server Push and Service Workers: The Perfect Partnership Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2 which has a killer feature known as HTTP/2 Server Push. During this year’s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics that he covered immediately piqued my interest: the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination. If you’ve never heard of HTTP/2 Server Push before, fear not - it’s not as scary as it sounds. HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users. What is HTTP/2 Server Push? When a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make an HTTP request for each resource that it needs and begin downloading them one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower. Before we go any further, let’s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this: As you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs. This is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and “pushes” them to the browser. This happens when the first HTTP request for the web page takes place and it eliminates an extra round trip, making your site faster. Using the same example above, let’s “push” the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like. Whoa, that looks different - let’s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn’t need to make an extra HTTP request to the server; instead it receives the critical files it needs all at once. Much better! There are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I’ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js. 
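If you’re curious what the server side of a push can look like, here’s a minimal sketch. I’m using the API from Node’s core http2 module here rather than the setup from my earlier post, so treat the specifics as illustrative assumptions; note too that real browsers will only speak HTTP/2 over TLS, so a deployment would use createSecureServer with a certificate.

const http2 = require('http2');

// Plain-text HTTP/2 for brevity; browsers require createSecureServer + TLS.
const server = http2.createServer();

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the script before the browser has even asked for it.
    stream.pushStream({ ':path': '/js/app.js' }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ':status': 200, 'content-type': 'application/javascript' });
      pushStream.end('console.log("pushed!");');
    });

    // Then answer the original request for the page itself.
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<!DOCTYPE html><script src="/js/app.js"></script>');
  }
});

server.listen(3000);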
In this post, I’m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings. HTTP/2 Server Push is awesome, but it isn’t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn’t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache “aware”. This means that the server isn’t aware of the state of your client. If you’ve visited a web page before, the server isn’t aware of this and will push the resource again anyway, regardless of whether or not you need it. HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory browsers can cancel HTTP/2 Server Push requests if they’ve already got something in cache, but unfortunately no browsers currently support it. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs. HTTP/2 Server Push & Service Workers So where do Service Workers fit in? Believe it or not, when combined together HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you’ve not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as a middleman between the browser and the network, and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand. Using Service Workers, you can easily cache assets on a user’s device. This means when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in cache on the user’s device. If it does, then it can simply return and serve it directly from the device instead of ever hitting the server. Let’s stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load! Let’s put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files. 
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>HTTP2 Push Demo</title> </head> <body> <h1>HTTP2 Push</h1> <img src="./images/beer-1.png" width="200" height="200" /> <img src="./images/beer-2.png" width="200" height="200" /> <br> <br> <img src="./images/beer-3.png" width="200" height="200" /> <img src="./images/beer-4.png" width="200" height="200" /> <!-- Scripts --> <script async src="./js/promise.min.js"></script> <script async src="./js/fetch.js"></script> <script> // Register the service worker if ('serviceWorker' in navigator) { navigator.serviceWorker.register('./service-worker.js').then(function(registration) { // Registration was successful console.log('ServiceWorker registration successful with scope: ', registration.scope); }).catch(function(err) { // registration failed :( console.log('ServiceWorker registration failed: ', err); }); } </script> </body> </html> In the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker toolbox. It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push. The Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns. I’ve added the following code to the service-worker.js file. (global => { 'use strict'; // Load the sw-toolbox library. importScripts('/js/sw-toolbox/sw-toolbox.js'); // The route for any requests toolbox.router.get('/push', global.toolbox.fastest); toolbox.router.get('/images/(.*)', global.toolbox.fastest); toolbox.router.get('/js/(.*)', global.toolbox.fastest); // Ensure that our service worker takes control of the page as soon as possible. global.addEventListener('install', event => event.waitUntil(global.skipWaiting())); global.addEventListener('activate', event => event.waitUntil(global.clients.claim())); })(self); Let’s break this code down further. Around line 4, I am importing the Service Worker toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I’ve told the toolbox to listen to these routes too. The best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn’t exist in cache, the code above will add it into cache so that we can retrieve it when it’s needed again. You may also notice the code global.toolbox.fastest - this is important because it gives you the compromise of fulfilling from the cache immediately, while firing off an additional HTTP request updating the cache for the next visit. But what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to “push” everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the user’s device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing! Using this technique, the waterfall chart for a repeat visit should look like the image below. 
If you look closely at the image above, you’ll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache. Whether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user’s browser doesn’t support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience. HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance. Summary When used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site’s load times. Together they mean super fast first load times and even faster repeat views to a web page. Whilst this technique is really effective, it’s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don’t just simply “push” everything; it could actually lead to slower page load times. If you’d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information. All of the code in this example is available on my GitHub repo - if you have any questions, please submit an issue and I’ll get back to you as soon as possible. If you’d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone’s talk at this year’s Chrome Developer Summit. I’d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live! 2016 Dean Hume deanhume 2016-12-15T00:00:00+00:00 https://24ways.org/2016/http2-server-push-and-service-workers/ code
308 How to Make a Chrome Extension to Delight (or Troll) Your Friends If you’re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends. And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius. But today is different. You’re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack… right? Nope. If there’s one thing 2016 has taught me, it’s that we—the self-serious, world-changing tech movers and shakers of the universe—haven’t changed one bit from our younger, more delightable selves. How do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like “Stinky Dinosaur” or “Tiny Potato”. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there’s still plenty of room in our big, grown-up hearts for fun. Today, we’re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension. Here’s a sneak peek at my final result featuring my partner, my cat, and me in cheerfully weird accessories. Your result will look however you want it to. Along the way, we’ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful. What you’ll need Adobe Illustrator (or a similar illustration program to export PNG) Some images of your friends A text editor Note: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you’ll find ways to do it. Getting started Download a local copy of the boilerplate for today’s tutorial here, and open it in a text editor. Inside, you’ll find a simple web app that you can run in Chrome. Open index.html in Chrome. You should see a grey page that says “Noname”. Open template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template. Note: We’re using Google Chrome to build and preview this application because the end-result is a Chrome extension. This means that the application isn’t totally cross-browser compatible, but that’s okay. Step 1: Gather your friends The first thing to do is choose who your muses are. Since the holidays are upon us, I’d suggest finding inspiration in your family. Create your artwork For each person, find an image where their face is pointed as forward as possible. 
Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. Here’s my crew: As you can see, my use of the word “family” extends to cats. There’s no judgement here. Notice that some of my photos don’t completely fill the artboard–that’s fine. The images will be clipped into ovals when they’re rendered in the application. Now, export your images by following these steps: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your faces. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your images to config.js Open scripts/config.js. This is where you configure your extension. Add key value pairs to the faces object. The key should be the person’s name, and the value should be the filepath to the image. faces: { leslie: 'images/face_leslie.png', kyle: 'images/face_kyle.png', beep: 'images/face_beep.png' } The application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads. The only thing that’s special about the faces object is that the person’s name will also be displayed when their face is chosen. Now, when you refresh the project in Chrome, you should see one of your friends along with their name, like this: Congrats, you’re off and running! Step 2: Add adjectives Now that you’ve loaded your friends into the application, it’s time to call them names. This step definitely yields the most laughs for the least amount of effort. Add a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering… prefixes: [ 'Loving', 'Drunk', 'Chatty', 'Merry', 'Creepy', 'Introspective', 'Cheerful', 'Awkward', 'Unrelatable', 'Hungry', ... ] When you refresh Chrome, you should see one of these words prefixed before your friend’s name. Voila! Step 3: Choose your color palette Real talk: I’m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you’ve been blessed with the gift of color aptitude, skip ahead. How to choose colors To create a color palette, I start by going to Coolors.co, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like. Copy these colors into your swatches in Adobe Illustrator. They’ll be the base for any illustrations you create later. Now you need a set of background colors. Here’s my trick to making these consistent with your illustration palette without completely blending in. Use the “Adjust Palette” tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors. Add your background colors to config.js Copy your hex codes into the bgColors array in config.js. bgColors: [ '#FFDD77', '#FF8E72', '#ED5E84', '#4CE0B3', '#9893DA', ... ] Now when you go back to Chrome and refresh the page, you’ll see your new palette! Step 4: Accessorize This is the fun part. 
We’re going to illustrate objects, accessories, lizards—whatever you want—and layer them on top of your friends. Your objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like “hats” or “glasses”. This will allow combinations of accessories to show at once, without showing two of the same type on the same person. Create a group of accessories To get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don’t feel like illustrating, you can use cut-out images instead. Next, follow the same steps as you did when you exported the faces. Here they are again: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your hats. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your accessories to config.js In config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array: customProps: { hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ] } Refresh Chrome and behold, accessories! Create as many more accessories as you want Repeat the steps above to create as many groups of accessories as you want. I went on to make glasses and hairstyles, so my final illustrator file looks like this: The last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses: customProps: { hair: [ 'images/hair_bowl.png', 'images/hair_bob.png' ], hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ], glasses: [ 'images/glasses_aviators.png', 'images/glasses_monacle.png' ] } And, there you have it! Randomly generated friends with random accessories. Feel free to go much crazier than I did. I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress… Step 5: Publish it It’s time to put this in your new tabs! You have two options: Publish it as a Chrome extension in the Chrome Web Store. Host it as a website and point to it with the New Tab Redirect extension. Today, we’re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I’d suggest sticking to Option #2. How to make a simple Chrome extension to replace the new tab page All you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. 
There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement: { "manifest_version": 2, "name": "Your extension name", "version": "1.0", "chrome_url_overrides" : { "newtab": "index.html" } } To test your extension, you’ll need to run it in Developer Mode. Here’s how to do that: Go to the Extensions page in Chrome by navigating to chrome://extensions/. Tick the checkbox in the upper-right corner labelled “Developer Mode”. Click “Load unpacked extension…” and select this project. If everything is running smoothly, you should see your project when you open a new tab. If there are any errors, they should appear in a yellow box on the Extensions page. Voila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you. Share the love Now that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they’ve become a part of everyday life–just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats. At the end of the day, let’s make something that makes us happy. Cheers! 2016 Leslie Zacharkow lesliezacharkow 2016-12-08T00:00:00+00:00 https://24ways.org/2016/how-to-make-a-chrome-extension/ code
307 Get the Balance Right: Responsive Display Text Last year in 24 ways I urged you to Get Expressive with Your Typography. I made the case for grabbing your readers’ attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout. When setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader’s preferences. You can take the same approach with display text by choosing larger sizes from the same scale. However, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words, a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window. Introducing viewport units With CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference1 pixels:

vw - viewport width
vh - viewport height
vmin - viewport height or width, whichever is smaller
vmax - viewport height or width, whichever is larger

In one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. The following makes the heading font size 13% of the viewport width: h1 { font-size: 13vw; } So for a selection of widths, the rendered font size would be:

Viewport width    Rendered font size (px) at 13vw
320               42
768               100
1024              133
1280              166
1920              250

A problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation. Landscape text is much bigger than portrait text when using vw units. The proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the consistency and considered design you would want when designing to make an impression. However if the text was the same size in both orientations, the visual effect would be much more consistent. This is where vmin comes into its own. Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering. h1 { font-size: 13vmin; } Landscape text is consistent with portrait text when using vmin units. 
Comparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude:

Viewport        13vw   13vmin   (rendered font size in px)
320 × 480       42     42
414 × 736       54     54
768 × 1024      100    100
1024 × 768      133    100
1280 × 720      166    94
1366 × 768      178    100
1440 × 900      187    117
1680 × 1050     218    137
1920 × 1080     250    140
2560 × 1440     333    187

Hybrid font sizing Using viewport units to set text in direct proportion to screen dimensions works well when sizing display text. It can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading: h1 { font-size: 13vmin; } h2 { font-size: 5vmin; } Using vmin alone for supporting text can scale it too quickly. The balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens. A solution to this is to use a hybrid method of sizing text2. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. For example: h2 { font-size: calc(0.5rem + 2.5vmin); } For a 320 px wide screen, the font size will be 16 px, calculated as follows: (0.5 × 16) + (320 × 0.025) = 8 + 8 = 16px For a 768 px wide screen, the font size will be 27 px: (0.5 × 16) + (768 × 0.025) = 8 + 19 = 27px This results in a more balanced subheading that doesn’t take emphasis away from the main heading: To give you an idea of the effect of using a hybrid approach, here’s a side-by-side comparison of hybrid and viewport text sizing, with the calc() hybrid method on top and vmin only on the bottom:

Viewport width   320   480   768   960   1080   1280   1440
calc() hybrid    16    20    27    32    35     40     44
vmin only        16    24    38    48    54     64     72

Over this festive period, try experimenting with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting. A reference pixel is based on the logical resolution of a device which takes into account double density screens such as Retina displays. ↩︎ For even more sophisticated uses of hybrid text sizing see the work of Mike Riethmuller. ↩︎ 2016 Richard Rutter richardrutter 2016-12-09T00:00:00+00:00 https://24ways.org/2016/responsive-display-text/ code
306 What next for CSS Grid Layout? In 2012 I wrote an article for 24 ways detailing a new CSS Specification that had caught my eye, at the time with an implementation only in Internet Explorer. What I didn’t realise at the time was that CSS Grid Layout was to become a theme on which I would base the next four years of research, experimentation, writing and speaking. As I write this article in December 2016, we are looking forward to CSS Grid Layout being shipped in Chrome and Firefox. What will ship early next year in those browsers is expanded and improved from the early implementation I explored in 2012. Over the last four years the spec has been developed as part of the CSS Working Group process, and has had input from browser engineers, specification writers and web developers. Use cases have been discussed, and features added. The CSS Grid Layout specification is now a Candidate Recommendation. This status means the spec is, to all intents and purposes, finished. The discussions now happening are on fine implementation details, and not new feature ideas. It makes sense to draw a line under a specification in order that browser vendors can ship complete, interoperable implementations. That approach is good for all of us: it makes development far easier if we know that a browser supports all of the features of a specification, rather than working out which bits are supported. However it doesn’t mean that work stops here, and that new use cases and features can’t be proposed for future levels of Grid Layout. Therefore, in this article I’m going to take a look at some of the things I think grid layout could do in the future. I would love for these thoughts to prompt you to think about how Grid - or any CSS specification - could better suit the use cases you have. Subgrid - the missing feature of Level 1 The implementation of CSS Grid Layout in Chrome, Firefox and Webkit is comparable and very feature complete. There is however one standout feature that has not been implemented in any browser as yet - subgrid. Once you set the value of the display property to grid, any direct children of that element become grid items. This is similar to the way that flexbox behaves: set display: flex and all direct children become flex items. The behaviour does not apply to children of those items. You can nest grids, just as you can nest flex containers, but the child grids have no relationship to the parent. Nesting Grids by Rachel Andrew (@rachelandrew) on CodePen. The subgrid behaviour would enable the grid defined on the parent to be used by the children. I feel this would be most useful when working with a multiple column flexible grid - for example a typical 12 column grid. I could define a grid on a wrapper, then position UI elements on that grid - from the major structural elements of my page down through the child elements to a form where I wanted the field to line up with items above. The specification contained an initial description of subgrid, with a value of subgrid for grid-template-columns and grid-template-rows; you can read about this in the August 2015 Working Draft. This version of the specification would have meant you could declare a subgrid in one dimension only, and create a different set of tracks in the other. In an attempt to get some implementation of subgrid, a revised specification was proposed earlier this year. This gives a single subgrid value of the display property. 
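To make the difference between the two proposals concrete, here is a sketch of each. Neither has shipped anywhere, so this is speculative syntax drawn from the drafts rather than working code:

/* August 2015 Working Draft: subgrid per axis. The child
   shares the parent grid's column tracks, but defines
   row tracks of its own. */
.child {
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: auto 1fr auto;
}

/* Revised proposal: a single display value, which subgrids
   both dimensions at once. */
.child {
  display: subgrid;
}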
As we now cannot specify a subgrid on rows OR columns, this limits us to a subgrid that works in two dimensions. At this point neither version has been implemented by anyone, and subgrids are marked as “at risk” in the Level 1 Candidate Recommendation. With regard to ‘at-risk’, this is explained as follows: “‘At-risk’ is a W3C Process term-of-art, and does not necessarily imply that the feature is in danger of being dropped or delayed. It means that the WG believes the feature may have difficulty being interoperably implemented in a timely manner, and marking it as such allows the WG to drop the feature if necessary when transitioning to the Proposed Rec stage, without having to publish a new Candidate Rec without the feature first.” If we lose subgrid from Level 1, as it looks likely that we will, this does give us a chance to further discuss and iterate on that feature. My current thoughts are that I’m not completely happy about subgrids being tied to both dimensions and feel that a return to the earlier version, or something like it, would be preferable. Further reading about subgrid My post from 2015 detailing why I feel subgrid is important My post based on the revised specification Eric Meyer’s thoughts on subgrid Write-up of a discussion from Igalia who work on the Blink and Webkit browser implementations Styling cells, tracks and areas Having defined a grid with CSS Grid Layout you can place child elements into that grid, however what you can’t do is style the grid tracks or cells. Grid doesn’t even go as far as multiple column layout, which has the column-rule properties. In order to set a background colour on a grid cell at the moment you would have to add an empty HTML element or insert some generated content as in the below example. I’m using a 1 pixel grid gap to fake lines between grid cells, and empty div elements and some generated content to colour those cells. Faked backgrounds and borders by Rachel Andrew (@rachelandrew) on CodePen. I think it would be a nice addition to Grid Layout to be able to directly add backgrounds and borders to cells, tracks and areas. There is an Issue raised in the CSS WG Drafts repository for Decorative Grid Cell pseudo-elements, if you want to add thoughts to that. More control over auto placement If you haven’t explicitly placed the direct children of your grid element they will be laid out according to the grid auto placement rules. You can see in this example how we have created a grid and the items are placing themselves into cells on that grid. Items auto-place on a defined grid by Rachel Andrew (@rachelandrew) on CodePen. The auto-placement algorithm is very cool. We can position some items, leaving others to auto-place; we can set items to span more than one track; we can use the grid-auto-flow property with a value of dense to backfill gaps in our grid, as the sketch at the end of this section shows. Websafe colors meet CSS Grid (auto-placement demo) by Rachel Andrew (@rachelandrew) on CodePen. I think however this could be taken further. In this issue posted to my CSS Grid AMA on GitHub, the question is raised as to whether it would be possible to ask grid to place items on the next available line of a certain name. This would allow you to skip tracks in the grid when using auto-placement, an issue that has also been raised by Emil Björklund in this post to the www-style list prior to spec discussion moving to GitHub. I think there are probably similar issues; if you can think of one, add a comment here. 
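For reference, here is the sketch of dense packing promised above; the class names are my own, invented for the example:

/* A four-column grid where later, smaller items are allowed
   to backfill any gaps left by wider items placed earlier. */
.gallery {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-auto-rows: 100px;
  grid-auto-flow: dense;
}

/* Items spanning two tracks create the gaps that
   dense packing then fills in. */
.gallery .wide {
  grid-column: span 2;
}

Being able to additionally say “place this item on the next available line named x” would extend that algorithm rather than replace it.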
Creating non-rectangular grid areas A grid area is a collection of grid cells, defined by setting the start and end lines for columns and rows, or by creating the area in the value of the grid-template-areas property as shown below. Those areas, however, must be rectangular - you can’t create an L-shaped or otherwise irregular shape. Grid Areas by Rachel Andrew (@rachelandrew) on CodePen. Perhaps in the future we could define an L-shape or other non-rectangular area into which content could flow, as in the below currently invalid code where a quote is embedded into an L-shaped content area. .wrapper { display: grid; grid-template-areas: "sidebar header header" "sidebar content quote" "sidebar content content"; } Flowing content through grid cells or areas Some use cases I have seen are perhaps not best solved by grid layout at all, but would involve grid working alongside other CSS specifications. As I detail in this post, there is a class of problems that I believe could be solved with the CSS Regions specification, or a revised version of that spec. Being able to create a grid layout, then flow content through the areas, could be very useful. Jen Simmons presented to the CSS Working Group at the Lisbon meeting a suggestion as to how this might work. In a post from earlier this year I looked at a collection of ideas from specifications that include Grid, Regions and Exclusions. These working notes from my own explorations might prompt ideas of your own. Solving the keyboard/layout disconnect One issue that grid, and flexbox to a lesser extent, raises is that it is very easy to end up with a layout that is disconnected from the underlying markup. This raises problems for people navigating using the keyboard, as when tabbing around the document you find yourself jumping to unexpected places. The problem is explained by Léonie Watson with reference to flexbox in Flexbox and the keyboard navigation disconnect. The grid layout specification currently warns against creating such a disconnect; however, I think it will take careful work by web developers in order to prevent this. It’s also not always as straightforward as it seems. In some cases you want the logical order to follow the source, and in others it would make more sense to follow the visual. People are thinking about this issue, as you can read in this mailing list discussion. Bringing your ideas to the future of Grid Layout When I’m not getting excited about new CSS features, my day job involves working on a software product - the CMS that is serving this very website, Perch. When we launched Perch there were many use cases that we had never thought of, despite having a good idea of what might be needed in a CMS and thinking through lots of use cases. The additional use cases brought to our attention by our customers and potential customers informed the development of the product from launch. The same will be true for Grid Layout. As a “product” grid has been well thought through by many people. Yet however hard we try, there will be use cases we just didn’t think of. You may well have one in mind right now. That’s ok, because as with any CSS specification, once Level One of grid is complete, work can begin on Level Two. The feature set of Level Two will be informed by the use cases that emerge as people get to grips with what we have now. This is where you get to contribute to the future of layout on the web.
When you hit up against the things you cannot do, don’t just mutter about how the CSS Working Group don’t listen to regular developers and code around the problem. Instead, take a few minutes and write up your use case. Post it to your blog, to Medium, create a CodePen, and go to the CSS Working Group GitHub specs repository and post an issue there. Write some pseudo-code, draw a picture - just make sure that the use case is described in enough detail that someone can see what problem you want grid to solve. It may be that - as with any software development - your use case can’t be solved in exactly the way you suggest. However, once we have a use case, collected with other use cases, methods of addressing that class of problems can be investigated. I opened this article by explaining I’d written about grid layout four years ago, and how we’re only now at a point where we will have Grid Layout available in the majority of browsers. Specification development, and implementation into browsers, takes time. This is actually a good thing, as it’s impossible to take back CSS once it is out there and being used by production websites. We want CSS in the wild to be well thought through, and that takes time. So don’t feel that because you don’t see your use case added to a spec immediately it has been ignored. Do your future self a favour and write down your frustrations or thoughts, and we can all make sure that the web platform serves the use cases we’re dealing with now and in the future. 2016 Rachel Andrew rachelandrew 2016-12-12T00:00:00+00:00 https://24ways.org/2016/what-next-for-css-grid-layout/ code
305 CSS Writing Modes Since you may not have a lot of time, I’m going to start at the end, with the dessert. You can use a little-known, yet important and powerful CSS property to make text run vertically. Like this. Or instead of running text vertically, you can lay out a set of icons or interface buttons in this way. Or, of course, with anything on your page. The CSS I’ve applied makes the browser rethink the orientation of the world, and flow the layout of this element at a 90° angle to “normal”. Check out the live demo, highlight the headline, and see how the cursor is now sideways. See the Pen Writing Mode Demo — Headline by Jen Simmons (@jensimmons) on CodePen. The code for accomplishing this is pretty simple. h1 { writing-mode: vertical-rl; } That’s all it takes to switch the writing mode from the web’s default horizontal top-to-bottom mode to a vertical right-to-left mode. If you apply such code to the html element, the entire page is switched, affecting the scroll direction, too. In my example above, I’m telling the browser that only the h1 will be in this vertical-rl mode, while the rest of my page stays in the default of horizontal-tb. So now the dessert course is over. Let me serve up this whole meal, and explain the CSS Writing Modes Specification. Why learn about writing modes? There are three reasons I’m teaching writing modes to everyone—including western audiences—and explaining the whole system, instead of quickly showing you a simple trick. We live in a big, diverse world, and learning about other languages is fascinating. Many of you lay out pages in languages like Chinese, Japanese and Korean. Or you might be inspired to in the future. Using writing-mode to turn bits sideways is cool. This CSS can be used in all kinds of creative ways, even if you are working only in English. Most importantly, I’ve found understanding Writing Modes incredibly helpful in understanding Flexbox and CSS Grid. Before I learned Writing Modes, I felt like there was still a big hole in my knowledge, something I just didn’t get about why Grid and Flexbox work the way they do. Once I wrapped my head around Writing Modes, Grid and Flexbox got a lot easier. Suddenly the Alignment properties, align-* and justify-*, made sense. Whether you know about it or not, the writing mode is the first building block of every layout we create. You can do what we’ve been doing for 25 years – and leave your page set to the default left-to-right direction, horizontal top-to-bottom writing mode. Or you can enter a world of new possibilities where content flows in other directions. CSS properties I’m going to focus on the CSS writing-mode property in this article. It has five possible values: writing-mode: horizontal-tb; writing-mode: vertical-rl; writing-mode: vertical-lr; writing-mode: sideways-rl; writing-mode: sideways-lr; The CSS Writing Modes Specification is designed to support a wide range of written languages in all our human and linguistic complexity. Which—spoiler alert—is pretty insanely complex. The global evolution of written languages has been anything but simple. So I’ve got to start with explaining some basic concepts of web page layout and writing systems. Then I can show you what these CSS properties do. Inline Direction, Block Direction, and Character Direction In the world of the web, there’s a concept of ‘block’ and ‘inline’ layout. If you’ve ever written display: block or display: inline, you’ve leaned on these concepts.
In the default writing mode, blocks stack vertically starting at the top of the page and working their way down. Think of how a bunch of block-level elements stack—like a bunch of paragraphs—that’s the block direction. Inline is how each line of text flows. The default on the web is from left to right, in horizontal lines. Imagine this text that you are reading right now, being typed out one character at a time on a typewriter. That’s the inline direction. The character direction is which way the characters point. If you type a capital “A”, for instance, on which side is the top of the letter? The characters of different languages can point in different directions. Most languages have their characters pointing towards the top of the page, but not all. Put all three together, and you start to see how they work as a system. The default settings for the web work like this. Now that we know what block, inline, and character directions mean, let’s see how they are used in different writing systems from around the world. The four writing systems of CSS Writing Modes The CSS Writing Modes Specification handles all the use cases for four major writing systems: Latin, Arabic, Han and Mongolian. Latin-based systems One writing system dominates the world more than any other, reportedly covering about 70% of the world’s population. The text is horizontal, running from left to right, or LTR. The block direction runs from top to bottom. It’s called the Latin-based system because it includes all languages that use the Latin alphabet, including English, Spanish, German, French, and many others. But there are many non-Latin-alphabet languages that also use this system, including Greek, Cyrillic (Russian, Ukrainian, Bulgarian, Serbian, etc.), Brahmic scripts (Devanagari, Thai, Tibetan), and many more. You don’t need to do anything in your CSS to trigger this mode. This is the default. Best practices, however, dictate that you declare in your opening <html> element which language and which direction (LTR or RTL) you are using. This website, for instance, uses <html lang='en-gb' dir='ltr'> to let the browser know this content is published in Great Britain’s version of English, in a left-to-right direction. Arabic-based systems Arabic, Hebrew and a few other languages run the inline direction from right to left. This is commonly known as RTL. Note that the inline direction still runs horizontally. The block direction runs from top to bottom. And the characters are upright. It’s not just the flow of text that runs from right to left, but everything about the layout of the website. The upper right-hand corner is the starting position. Important things are on the right. The eyes travel from right to left. So, typically, RTL websites use layouts that are just like LTR websites, only flipped. On websites that support both LTR and RTL, like the United Nations’ site at un.org, the two layouts are mirror images of each other. For many web developers, our experiences with internationalization have focused solely on supporting Arabic and Hebrew script. CSS layout hacks for internationalization & RTL To prepare an LTR project to support RTL, developers have had to create all sorts of hacks. For example, the Drupal community started a convention of marking every margin-left and -right, every padding-left and -right, every float: left and float: right with the comment /* LTR */. Then later developers could search for each instance of that exact comment, and create stylesheets to override each left with right, and vice versa.
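As a rough sketch, that convention looks something like the following; the selectors and values are invented for illustration, and the exact override mechanism varied from project to project.

.sidebar {
  float: left; /* LTR */
  margin-right: 20px; /* LTR */
}

/* ...then a separate RTL stylesheet flips each flagged declaration */
[dir='rtl'] .sidebar {
  float: right;
  margin-right: 0;
  margin-left: 20px;
}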
It’s a tedious and error-prone way to work. CSS itself needed a better way to let web developers write their layout code once, and easily switch language directions with a single command. Our new CSS layout system does exactly that. Flexbox, Grid and Alignment use start and end instead of left and right. This lets us define everything in relationship to the writing system, and switch directions easily. By writing justify-content: flex-start, justify-items: end, and eventually margin-inline-start: 1rem, we have code that doesn’t need to be changed. This is a much better way to work. I know it can be confusing to think through start and end as replacements for left and right. But it’s better for any multilingual project, and it’s better for the web as a whole. Sadly, I’ve seen CSS preprocessor tools that claim to “fix” the new CSS layout system by getting rid of start and end and bringing back left and right. They want you to use their tool, write justify-content: left, and feel self-righteous. It seems some folks think the new way of working is broken and should be discarded. It was created, however, to fulfill real needs. And to reflect a global internet. As Bruce Lawson says, WWW stands for the World Wide Web, not the Wealthy Western Web. Please don’t try to convince the industry that there’s something wrong with no longer being biased towards western culture. Instead, spread the word about why this new system is here. Spend a bit of time drilling the concept of inline and block into your head, and getting used to start and end. It will be second nature soon enough. I’ve also seen CSS preprocessors that let us use this new way of thinking today, even though all the parts aren’t yet fully supported by browsers. Some tools let you write text-align: start instead of text-align: left, and let the preprocessor handle things for you. That is terrific, in my opinion. A great use of the power of a preprocessor to help us switch over now. But let’s get back to RTL. How to declare your direction You don’t want to use CSS to tell the browser to switch from an LTR language to RTL. You want to do this in your HTML. That way the browser has the information it needs to display the document even if the CSS doesn’t load. This is accomplished mainly on the html element. You should also declare your main language. As I mentioned above, the 24 ways website is using <html lang='en-gb' dir='ltr'> to declare the LTR direction and the use of British English. The UN Arabic website uses <html lang='ar' dir='rtl'> to declare the site as an Arabic site, using an RTL layout. Things get more complicated when you’ve got a page with a mix of languages. But I’m not going to get into all of that, since this article is focused on CSS and layouts, not explaining everything about internationalization. Let me just leave direction here by noting that much of the heavy work of laying out the characters which make up each word is handled by Unicode. If you are interested in learning more about LTR, RTL and bidirectional text, watch this video: Introduction to Bidirectional Text, a presentation by Elika Etemad. Meanwhile, let’s get back to CSS. The writing mode CSS for Latin-based and Arabic-based systems For both of these systems—Latin-based and Arabic-based, whether LTR or RTL—the same CSS property applies for specifying the writing mode: writing-mode: horizontal-tb. That’s because in both systems, the inline text flow is horizontal, while the block direction is top-to-bottom. This is expressed as horizontal-tb.
horizontal-tb is the default writing mode for the web, so you don’t need to specify it unless you are overriding something else higher up in the cascade. You can just imagine that every site you’ve ever built came with: html { writing-mode: horizontal-tb; } Now let’s turn our attention to the vertical writing systems. Han-based systems This is where things start to get interesting. Han-based writing systems include the CJK languages: Chinese, Japanese, Korean and others. There are two options for laying out a page, and sometimes both are used at the same time. Much of CJK text is laid out like Latin-based languages, with a horizontal top-to-bottom block direction, and a left-to-right inline direction. This is the more modern way of doing things, started in the 20th century in many places, and further pushed into domination by the computer and later the web. The CSS to do this bit of the layout is the same as above: section { writing-mode: horizontal-tb; } Or, you know, do nothing, and get that result as a default. Alternatively, Han-based languages can be laid out in a vertical writing mode, where the inline direction runs vertically, and the block direction goes from right to left. See both options in this diagram: Note that the horizontal text flows from left to right, while the vertical text flows from right to left. Wild, eh? This Japanese issue of Vogue magazine is using a mix of writing modes. The cover opens on the left spine, the opposite of what an English magazine does. This page mixes English and Japanese, and typesets the Japanese text in both horizontal and vertical modes. Under the title “Richard Stark” in red, you can see a passage that’s horizontal-tb and LTR, while the longer passage of text at the bottom of the page is typeset vertical-rl. The red enlarged cap marks the beginning of that passage. The long headline above the vertical text is typeset LTR, horizontal-tb. The details of how to set the default of the whole page will depend on your use case. But each element, each headline, each section, each article can be marked to flow opposite to the default, however you’d like. For example, perhaps you leave the default as horizontal-tb, and specify your vertical elements like this: div.articletext { writing-mode: vertical-rl; } Or alternatively you could change the default for the page to a vertical orientation, and then set specific elements to horizontal-tb, like this: html { writing-mode: vertical-rl; } h2, .photocaptions, section { writing-mode: horizontal-tb; } If your page has a sideways scroll, then the writing mode will determine whether the page loads with the upper left corner as the starting point, scrolling to the right (horizontal-tb as we are used to), or if the page loads with the upper right corner as the starting point, scrolling to the left to display overflow. Here’s an example of that change in scrolling direction, in a CSS Writing Mode demo by Chen Hui Jing. Check out her demo — you can switch from horizontal to vertical writing modes with a checkbox and see the difference. Mongolian-based systems Now, hopefully so far all of this kind of makes sense. It might be a bit more complicated than expected, but it’s not so hard. Well, enter the Mongolian-based systems. Mongolian is also a vertical script language. Text runs vertically down the page. Just like Han-based systems. There are two major differences. First, the block direction runs the other way. In Mongolian, block-level elements stack from left to right.
Here’s a drawing of how Wikipedia would look in Mongolian if it were laid out correctly. Perhaps the Mongolian version of Wikipedia will be redone with this layout. Now you might think, that doesn’t look so weird. Tilt your head to the left, and it’s very familiar. The block direction starts on the left side of the screen and goes to the right. The inline direction starts on the top of the page and moves to the bottom (similar to RTL text, just turned 90° counter-clockwise). But here comes the other huge difference. The character direction is “upside down”. The tops of the Mongolian characters are not pointing to the left, towards the start edge of the block direction. They point to the right. Like this: Now you might be tempted to ignore all this. Perhaps you don’t expect to be typesetting Mongolian content anytime soon. But here’s why this is important for everyone — the way Mongolian works defines the results of writing-mode: vertical-lr. And it means we cannot use vertical-lr for typesetting content in other languages in the way we might otherwise expect. If we took what we know about vertical-rl and guessed how vertical-lr works, we might imagine this: But that’s wrong. Here’s how they actually compare: See the unexpected situation? In both writing-mode: vertical-rl and writing-mode: vertical-lr, Latin text is rotated clockwise. Neither writing mode lets us rotate text counter-clockwise. If you are typesetting Mongolian content, apply this CSS in the same way you would apply writing-mode to Han-based writing systems. To the whole page on the html element, or to specific parts of the page, like this: section { writing-mode: vertical-lr; } Now, if you are using writing-mode for a graphic design effect on a language that is otherwise typeset horizontally, I don’t think writing-mode: vertical-lr is useful. If the text wraps onto two lines, it stacks in a very unexpected way. So I’ve sort of obliterated it from my toolkit. I find myself using writing-mode: vertical-rl a lot. And never using -lr. Hm. Writing modes for graphic design So how do we use writing-mode to turn English headlines sideways? We could rely on transform: rotate(), but writing-mode will keep the rotated text in the flow of the layout. Here are two examples, one for each direction. (By the way, each of these demos uses CSS Grid for their overall layout, so be sure to test them in a browser that supports CSS Grid, like Firefox Nightly.) In this demo 4A, the text is rotated clockwise using this code: h1 { writing-mode: vertical-rl; } In this demo 4B, the text is rotated counter-clockwise using this code: h1 { writing-mode: vertical-rl; transform: rotate(180deg); text-align: right; } I use vertical-rl to rotate the text so that it takes up the proper amount of space in the overall flow of the layout. Then I rotate it 180° to spin it around to the other direction. And then I use text-align: right to get it to rise up to the top of its container. This feels like a hack, but it’s a hack that works. Now what I would like to do instead is use another CSS value that was designed for this use case — one of the two other options for writing mode. If I could, I would lay out example 4A with: h1 { writing-mode: sideways-rl; } And lay out example 4B with: h1 { writing-mode: sideways-lr; } The problem is that these two values are only supported in Firefox. None of the other browsers recognize sideways-*. Which means we can’t really use it yet. In general, the writing-mode property is very well supported across browsers.
So I’ll use writing-mode: vertical-rl for now, with the transform: rotate(180deg); hack to fake the other direction. There’s much more to what we can do with the CSS designed to support multiple languages, but I’m going to stop with this intermediate introduction. If you do want a bit more of a taste, look at this example that adds text-orientation: upright; to the mix — turning the individual letters of the Latin font to be upright instead of sideways. It’s this demo 4C, with this CSS applied: h1 { writing-mode: vertical-rl; text-orientation: upright; text-transform: uppercase; letter-spacing: -25px; } You can check out all my Writing Modes demos at labs.jensimmons.com/#writing-modes. I’ll leave you with this last demo. One that applies a vertical writing mode to the subheadlines of a long article. I like how small details like this can really bring a fresh feeling to the content. See the Pen Writing Mode Demo — Article Subheadlines by Jen Simmons (@jensimmons) on CodePen. 2016 Jen Simmons jensimmons 2016-12-23T00:00:00+00:00 https://24ways.org/2016/css-writing-modes/ code
304 Five Lessons From My First 18 Months as a Dev I recently moved from Sydney to London to start a dream job with Twitter as a software engineer. A software engineer! Who would have thought. Having started my career as a journalist, I find the title ‘engineer’ very strange. The notion of writing in the first person is also very strange. Journalists are taught to be objective, invisible, to keep yourself out of the story. And here I am writing about myself on a public platform. Cringe. Since I started learning to code I’ve often felt compelled to write about my experience. I want to share my excitement and struggles with the world! But as a junior I’ve been held back by thoughts like ‘whatever you have to say won’t be technical enough’, ‘any time spent writing a blog would be better spent writing code’, ‘blogging is narcissistic’, etc. Well, I’ve been told that your thirties are the years where you stop caring so much about what other people think. And I’m almost 30. So here goes! These are five key lessons from my first year and a half in tech: Deployments should delight, not dread Lesson #1: Making your deployment process as simple as possible is worth the investment. In my first dev job, I dreaded deployments. We would deploy every Sunday night at 8pm. Preparation would begin the Friday before. A nominated deployment manager would spend half a day tagging master, generating scripts, writing documentation and raising JIRAs. The only fun part was choosing a train gif to post in HipChat: ‘All aboard! The deployment train leaves in 3, 2, 1…’ When Sunday night came around, at least one person from every squad would need to be online to conduct smoke tests. Most times, the deployments would succeed. Other times they would fail. Regardless, deployments ate into people’s weekend time — and they were intense. Devs would rush to have their code approved before the Friday cutoff. Deployment managers who were new to the process would fear making a mistake. The team knew deployments were a problem. They were constantly striving to improve them. And what I’ve learnt at Twitter is that when a team gets this right, life is bliss. TweetDeck’s deployment process fills me with joy and delight. It’s quick, easy and stress-free. In fact, it’s so easy I deployed code on my first day in the job! Anyone can deploy, at any time of day, with a single command. Rollbacks are just as simple. There’s no rush to make the deployment train. No manual preparation. No fuss. Value — whether in the form of big new features, simple UI improvements or even production bug fixes — can be shipped in an instant. The team assures me the process wasn’t always like this. They invested lots of time in making their deployments better. And it’s clearly paid off. Code reviews need love, time and acceptance Lesson #2: Code reviews are a three-way gift. Every time I review someone else’s code, I help them, the team and myself. Code reviews were another pain point in my previous job. And to be honest, I was part of the problem. I would raise code reviews that were far too big. They would take days, sometimes weeks, to get merged. One of my reviews had 96 comments! I would rarely review other people’s code because I felt too junior, like my review didn’t carry any weight. The review process itself was also tiring, and was often raised in retrospectives as being slow. In order for code to be merged it needed to have ticks of approval from two developers and a third tick from a peer tester.
It was the responsibility of the author to assign the reviewers and tester. It was felt that if it was left to team members to assign themselves to reviews, the “someone else will do it” mentality would kick in, and nothing would get done. At TweetDeck, no-one is specifically assigned to reviews. Instead, when a review is raised, the entire team is notified. Without fail, someone will jump on it. Reviews are seen as blocking. They’re seen as equally important as, if not more important than, your own work. I haven’t seen a review sit for longer than a few hours without comments. We also don’t work on branches. We push single commits for review, which are then merged to master. This forces the team to work in small, incremental changes. If a review is too big, or if it’s going to take up more than an hour of someone’s time, it will be sent back. What I’ve learnt so far at Twitter is that code reviews must be small. They must take priority. And they must be a team effort. Being a new starter is no “get out of jail free” card. In fact, it’s even more of a reason to be reviewing code. Reviews are a great way to learn, get across the product and see different programming styles. If you’re like me, and find code reviews daunting, ask to pair with a senior until you feel more confident. I recently paired with my mentor at Twitter and found it really helpful. Get friendly with feature flagging Lesson #3: Feature flagging gives you complete control over how you build and release a project. Say you’re implementing a new feature. It’s going to take a few weeks to complete. You’ll complete the feature in small, incremental changes. At what point do these changes get merged to master? At what point do they get deployed? Do you start at the back end and finish with the UI, so the user won’t see the changes until they’re ready? With feature flagging — it doesn’t matter. In fact, with feature flagging, by the time you are ready to release your feature, it’s already deployed, sitting happily in master with the rest of your codebase. A feature flag is a boolean value; the code relating to the thing you’re working on gets wrapped in a check against it, and will only be executed if the value is true. if (TD.decider.get('new_feature')) { //code for new feature goes here } In my first dev job, I deployed a navigation link to the feature I’d been working on, making it visible in the product, even though the feature wasn’t ready. “Why didn’t you use a feature flag?” a senior dev asked me. An honest response would have been: “Because they’re confusing to implement and I don’t understand the benefits of using them.” The fix had to wait until the next deployment. The best thing about feature flagging at TweetDeck is that there is no need to deploy to turn a feature on or off. We set the status of the feature via an interface called Deckcider, and the code makes regular API requests to get the status. At TweetDeck we are also able to roll our features out progressively. The first rollout might be to a staging environment. Then to employees only. Then to 10 per cent of users, 20 per cent, 30 per cent, and so on. A gradual rollout allows you to monitor for bugs and unexpected behaviour, before releasing the feature to the entire user base. Sometimes a piece of work requires changes to existing business logic. So the code might look more like this: if (TD.decider.get('change_to_existing_feature')) { //new logic goes here } else { //old logic goes here } This seems messy, right?
Riddling your code with if/else statements to determine which path of logic should be executed, or which version of the UI should be displayed. But at Twitter, this is embraced. You can always clean up the code once a feature is turned on. This isn’t essential, though. At least not in the early days. When a cheeky bug is discovered, having the flag in place allows the feature to be very quickly turned off again. Let data and experimentation drive development Lesson #4: Use data to determine the direction of your product and measure its success. The first company I worked for placed a huge amount of emphasis on data-driven decision making. If we had an idea, or if we wanted to make a change, we were encouraged to “bring data” to show why it was necessary. “Without data, you’re just another person with an opinion,” the chief data scientist would say. This attitude helped to ensure we were building the right things for our customers. Instead of just plucking a new feature out of thin air, it was chosen based on data that reflected its need. But how do you design that feature? How do you know that the design you choose will have the desired impact? That’s where experiments come into play. At TweetDeck we make UI changes that we hope will delight our users. But the assumptions we make about our users are often wrong. Our front-end team recently sat in a room and tried to guess which UIs from A/B tests had produced better results. Half the room guessed incorrectly every time. We can’t assume a change we want to make will have the impact we expect. So we run an experiment. Here’s how it works. Users are placed into buckets. One bucket of users will have access to the new feature; the other won’t. We hypothesise that the bucket exposed to the new feature will have better results. The beauty of running an experiment is that we’ll know for sure. Instead of blindly releasing the feature to all users without knowing its impact, once the experiment has run its course, we’ll have the data to make decisions accordingly. Hire the developer, not the degree Lesson #5: Testing candidates on real world problems will allow applicants from all backgrounds to shine. Surely, a company like Twitter would give their applicants insanely difficult code tests, and the toughest technical questions that only the cleverest CS graduates could pass, I told myself when applying for the job. Lucky for me, this wasn’t the case. The process was insanely difficult—don’t get me wrong—but the team at TweetDeck gave me real world problems to solve. The first code test involved bug fixes, performance and testing. The second involved DOM traversal and manipulation. Instead of being put on the spot in a room with a whiteboard and pen, I was given a task, access to the internet, and time to work on it. Similarly, in my technical interviews, I was asked to pair program on real world problems that I was likely to face on the job. In one of my phone screenings I was told Twitter wanted to increase diversity in its teams. Not just gender diversity, but also diversity of experience and background. Six months later, with a bunch of new hires, team lead Tom Ashworth says TweetDeck has the most diverse team it’s ever had. “We designed an interview process that gave us a way to simulate the actual job,” he said. “It’s not about testing whether you learnt an algorithm in school.” Is this lowering the bar? No. The bar is whether a candidate has the ability to solve problems they are likely to face on the job.
I recently spoke to a longstanding Atlassian engineer who said they hadn’t seen an algorithm in their seven years at the company. These days, only about 50 per cent of developers have computer science degrees. The majority of developers are self-taught, learn on the job or via online courses. If you want to increase diversity in your engineering team, ensure your interview process isn’t excluding these people. 2016 Amy Simmons amysimmons 2016-12-20T00:00:00+00:00 https://24ways.org/2016/my-first-18-months-as-a-dev/ process
303 We Need to Talk About Technical Debt In my work with clients, a lot of time is spent assessing old, legacy, sprawling systems and identifying good code, bad code, and technical debt. One thing that constantly strikes me is the frequency with which bad code and technical debt are conflated, so let me start by saying this: Not all technical debt is bad code, and not all bad code is technical debt. Sometimes your bad code is just that: bad code. Calling it technical debt often feels like a more forgiving and friendly way of referring to what may have just been a poor implementation or a substandard piece of work. It is an oft-misunderstood phrase, and when mistaken for meaning ‘anything legacy, old, hacky, nasty or bad’, technical debt is swept under the carpet along with all of the other parts of the codebase we’d rather not talk about, and therein lies the problem. We need to talk about technical debt. What We Talk About When We Talk About Technical Debt The thing that separates technical debt from the rest of the hacky code in our project is the fact that technical debt, by definition, is something that we knowingly and strategically entered into. Debt doesn’t happen by accident: debt happens when we choose to gain something otherwise-unattainable immediately in return for paying it back (with interest) later on. An Example You’re a front-end developer working on a SaaS product, and your sales team is courting a large customer – a customer so large that you can’t really afford to lose them. The customer tells you that as long as you can allow them to theme your SaaS application according to their branding, they are willing to sign on the dotted line… the problem being that your CSS architecture was never designed to incorporate theming at all, and there isn’t currently a nice, clean way to incorporate a theme into the codebase. You and the business make the decision that you will hack a theme into the product in two days. It’s going to be messy, it’s going to be ugly, but you can’t afford to lose a huge customer just because your CSS isn’t quite right, right now. This is technical debt. You deliver the theme, the customer signs up, and everyone is happy. Except you (and the business, because you are one and the same) have a decision to make: Do we go back and build theming into the CSS architecture as a first-class citizen, porting the hacked theme back into a codified and formal framework? Do we carry on as we are? Things are working okay, and the customer paid up, so is there any reason to invest time and effort into things after we (and the customer) got what we wanted? Option 1 is choosing to pay off your debts; Option 2 is ignoring your repayments. With Option 1, you’re acknowledging that you did what you could given the constraints, but, free of constraints, you’d have done something different. Now, you are choosing to implement that something different. With Option 2, however, you are avoiding your responsibility to repay your debt, and you are letting interest accrue. The problem here is that… your SaaS product now offers theming to one of your customers; another potential customer might also demand the ability to theme their instance of your product; you can’t refuse them that request, nor can you quickly fulfil it; you hack in another theme, thus adding to the balance of your existing debt; and so on (plus interest) for every subsequent theme you need to implement.
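In CSS terms, that accumulation might look something like this. The selectors and colours are invented for illustration, but the shape will be familiar: each new customer gets another block of overrides bolted on, with no theming framework underneath any of them.

/* Theme one, hacked in under deadline */
.customer-acme .masthead { background-color: #e25822; }
.customer-acme .btn { border-color: #e25822; }

/* Theme two, hacked on top of the first hack */
.customer-globex .masthead { background-color: #1a6b54; }
.customer-globex .btn { border-color: #1a6b54; }

/* ...and so on, one block per customer, plus interest */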
Here you have increased entropy whilst making little to no attempt to address what you already knew to be problems. Your second, third, fourth, fifth request for theming will be hacked on top of your hack, further accumulating debt whilst offering nothing by way of a repayment. After a long enough period, the code involved will get so unwieldy, so hard to work with, that you are forced to tear it all down and start again, and the most painful part of this is that you’re actually paying off even more than your debt repayments would have been in the first place. Two days of hacking plus, say, five days of subsequent refactoring, would still have been substantially less than the weeks you will now have to spend rewriting your CSS to fix and incorporate the themes properly. You’ve made a loss; your strategic debt ultimately became a loss-making exercise. The important thing to note here is that you didn’t necessarily write bad code. You knew there were two options: the quick way and the correct way. The decision to take the quick route was a definite choice, because you knew there was a better way. Implementing the better way is your repayment. Good Debt and Bad Debt Technical debt is acceptable as long as you have intentions to settle; it can be a valuable solution to a business problem, provided the right approach is taken afterwards. That doesn’t, however, mean that all debt is born equal. Just as in real life, there is good debt and there is bad debt. Good debt might be… a mortgage; a student loan; or a business loan. These are types of debt that will secure you the means of repaying them. These are well-considered debts whose very reason for being will allow you to make the money to pay them off—they have real, tangible benefit. A business loan to secure some equipment and premises will allow you to start an enterprise whose revenue will allow you to pay that debt back; a student loan will allow you to secure the kind of job that has the ability to pay a student loan back. These kinds of debt involve a considered and well-balanced decision to acquire something in the short term in the knowledge that you will have the means, in the long term, to pay it back. Conversely, bad debt might be… borrowing $1,000 from a loan shark so you can go to Vegas; or taking out a payday loan in order to buy a new television. Both of these kinds of debt will leave you paying for things that didn’t provide you a way of earning your own capital. That is to say, the loans taken did not secure anything that would help pay off said loans. These are bad debts that will usually provide a net loss. You really are only gaining the short term in exchange for a long-term financial responsibility: i.e., was it worth it? A good litmus test for debt is to compare the gains of its immediate benefit with the cost of its long-term commitment. The earlier example of theming a site is a good debt, provided we keep up our repayments (all debt is bad debt if we don’t). A calculated decision to do something ‘wrong’ in the short term with the promise of better payoffs later on. Bad Technical Debt The majority of my work is with front-end development teams—CSS is what I do. To that end, the most succinct example of technical debt for that audience is simply: !important All front-end developers know the horrors and dangers associated with using !important, yet we continue to use it. Why? It’s not necessarily because we’re bad developers, but because we see a shortcut.
!important is usually implemented as a quick way out of a sticky specificity situation. We could spend the rest of the day refactoring our CSS to fix the issue at its source, or we could spend mere seconds typing the word !important and patch over the symptoms. This is us making an explicit decision to do something less than ideal now in exchange for immediate benefit. After all, refactoring our CSS will take a lot more time, and will still only leave us with the same outcome that the vastly quicker !important solution would, so it seems to make better business sense. However, this is a bad debt. !important takes seconds to implement but weeks to refactor. The cost of refactoring this back out later will be an order of magnitude higher than it would be to have done things properly the first time. The first !important usually sets a precedent, and subsequent developers are likely to have to use it themselves in order to get around the one that you left. So many CSS projects deteriorate because of this one simple word, and rewrites become more and more imminent. That makes it possibly the most costly 10 bytes a CSS developer could ever write. Bad Code Now we’ve got a good idea of what constitutes technical debt, let’s take a look at what constitutes bad code. Something I hear time and time again in my client work goes a little like this: We’ve amassed a lot of technical debt and we’d like to get a strategy in place to begin dealing with it. Whilst I genuinely admire their willingness to identify, and desire to fix, problems in their code, sometimes they’re not looking at technical debt at all—sometimes they’re just looking at bad code, plain and simple. Where technical debt is knowing that there’s a better way, but the quicker way makes more sense right now, bad code is not caring if there’s a better way at all. Again, looking at a CSS-specific world, a lot of bad code is contributed by non-front-end developers with little training, appreciation, or even respect for the front-end landscape. Writing code with reckless abandon should not be described as technical debt, because to do so would imply that… the developers knew they were implementing a sub-par solution, but… the developers also knew that a better solution was out there, which… implies that it can be tidied up relatively simply. Developers writing bad code is a larger and more cultural problem that requires a lot more effort to fix. Hopefully—and usually—bad code is in the minority, but it helps to be objective in identifying and solving it. Bad code usually doesn’t happen for a good enough reason, and is therefore much harder to justify. Technical debt often represents ability in judgement, whereas bad code often represents a gap in skills. Takeaway Take time to familiarise yourself with the true concepts underlying technical debt and why it exists. Understand that technical debt can be good or bad. Admit that sometimes code is just of poor quality. Understanding these points will allow you to make better calls around what you might need to refactor and when, and what skills gaps you might have in your team. Sometimes it’s okay to cut corners if there is a tangible gain to be had in the immediate term. Technical debt is okay provided it is a sensible debt and you have intentions to pay it off. Technical debt is not necessarily synonymous with bad code, and bad code isn’t necessarily technical debt.
Technical debt is code that was implemented given limited knowledge or resource, with the understanding that you would need to repay something in future. Technical debt is not inherently bad—failure to make repayments is. Periodically, it is justifiable—encouraged, even—to enter a debt in order to fulfil a more pressing matter. However, it is imperative that we begin making repayments as soon as we are capable, be that based on newly available time or knowledge. Bad code is worse than technical debt as it represents a lack of knowledge or quality control within a team. It needs a much more fundamental fix. 2016 Harry Roberts harryroberts 2016-12-05T00:00:00+00:00 https://24ways.org/2016/we-need-to-talk-about-technical-debt/ code
302 Flexible Project Management in Inflexible Environments Handling unforeseen circumstances is an inevitable part of any project. It’s also often the most uncomfortable, and there is no amount of skill or planning that will fully eradicate the need to adapt to change. The ability to be flexible, responsive, and unafraid of facing not only problems, but also potentially positive scope changes and new ideas, isn’t an easy one to master. I am by no means saying that I have, but what I have learned is that there is often the temptation to shut out anything that might derail your plan, even sometimes at the cost of the quality you’re committed to. The reality is that as someone leading a project you know there will be challenges, but, in general, it’s a hassle to try to keep the landscape open. Problems are bridges we should cross when we come to them, but intentional changes to the plan, and adapting for the sake of improving your first idea, are harder. There are tight schedules, resource is planned miles ahead, and you’re already juggling twenty other things. If you’re passionate about the quality of work you deliver and are working somewhere that considers itself expert within the field of digital, then having an attitude of flexibility is extremely important. It’s important when you’re overcoming a challenge or problem, but it’s also important for allowing ideas to evolve and be refined as much as they can be throughout the course of a project. Where theory falls short The premise of any Agile methodology, Scrum for example, is based around being able to work efficiently, react quickly and deliver relevant chunks of a product in manageable increments. It’s often hailed as king of flexible management and it can work really well, especially for in-house software products developed over a long or even an indefinite period of time. It holds off defining scope too far ahead and lets teams focus on smaller amounts of work, and allows them to regularly reprioritise. Unfortunately though, not all environments lend themselves as easily to a fully Agile setup. Even the ones that do may be restrained from putting it fully into practice for an array of other internal reasons. Delivering digital services to clients—within an agency setting or as a freelancer—often demands a more rigid structure. You need clear sign-off points, and there’s a lot less flexibility in defining features or working within budgets and timeframes. To start with, for a project to warrant a fully Agile team working on it, and especially for agencies, you need clients big enough and rich enough to justify the resource. You also need a lot of client trust to propose defining features and scope as you go. Although this is achievable—and there are agencies that operate an agile setup—it takes a long journey to reach that scale in the full sense of the word. Building a reputation that commands unconditional trust and reaching the point where your projects are consistently of a certain size often requires the backing of a long journey of success and excellence. So there is a lot of room left for understanding how we can best strive to still deliver excellent projects within more constrained structures. We know that rigid waterfall planning, more often than not, falls over as soon as a project gets anything past a basic brochure site. There are many critiques of the system, but one of the main ones tends to be that nobody considers each other’s work properly, which can result in very expensive and inefficient development.
Equally, for reasons we’ve already touched upon, running fully agile teams often isn’t the right answer. So many companies, individuals, and organisations look for a middle ground that balances being flexible and adaptive, but also provides enough upfront commitment to agree budgets, get client/stakeholder sign-off, and effectively coordinate internal resource across multiple parallel projects. Although I don’t have a perfect formula—and can very much assert there is no one perfect way of managing a project because every project is different to the next—I’ve identified a few different ways you can approach flexibility that have really helped me in running projects more smoothly within more realistic constraints. Planned Flexibility Drawing on some of the traditional methodologies such as PRINCE2, a good starting point for aspiring to be flexible is by planning for it from the start. Planning flexibility comes in a few forms. For one, you can regularly identify and log potential risks as a generally good, on-going habit over the course of the project. This essentially just involves scanning the horizon for potential blips on a regular basis (for example weekly) by consulting with your team and documenting it somewhere. It means you have a checkpoint when you sit down and make sure you’re minimising what will or may catch you by surprise. A good time to do this is in a weekly catch-up meeting. It’s not going to fix all your problems, but it will make sure you have a head start on the ones you can see coming. On the subject of team meetings, set up recurring project events, including a weekly call and a weekly team meeting; depending on the size of the project, I also like to try to do a stand-up as often as possible. Keeping everyone involved and bought in to a project is going to help you infinitely when you need to spot a problem or manage changes to the plan. It will be the difference between your designer spotting an issue and making a mental note to ‘tell you later’, and them actually coming over to tell you directly and immediately. Despite the overhead of meetings, and looping people into stages that they aren’t directly responsible for, the business benefits are clear: the chances of success are drastically increased. Planning in, and being aware of how important your team is, will help you be flexible. Building contingency (formally known as slack) into your project plan from the word go is another well-known and essential way of planning to be flexible. Your project plan will change a lot over the course of a project, but there are still the days that you estimate a job will take, and the days you should actually plan in. Most sensible management teams understand that budgets need to be agreed with this slack in mind or you will not be able to deliver a quality service. I believe that commercial awareness is one of the most valuable skills a project manager can have, but penny-pinching will ruin client and team relationships, destroy buy-in and creativity, and often leave you with a much more expensive, hacky, and resented product. It’s not a justification to let budgets spiral out of control, but a way of thinking about the bigger picture and wider plan of the company itself. It’s unlikely you want high staff turnover because everyone fell out while you were screaming about money at them and they didn’t feel like they could do a good job. It’s also unlikely that you will be able to deliver quality products, which will win you a strong reputation and subsequently bigger and better projects.
Evaluating risk factors and building in the right amount of slack from the start will give you more wriggle room when you need to adapt and react. On the flip side, also keeping an overview of the wider workload (that you’re not necessarily responsible for), and knowing who to talk to if resource is becoming free or needs filling, is another handy way of being able to react quickly and ensuring your management system is respected. You want pockets of backup time planned in, but you also want everyone to be as productive as they can be most of the time. Never run at 100% capacity: as soon as something does need to change, you’re left with nowhere to move. Transparency Having a client or stakeholder that trusts you is a really powerful aid in any regard, but especially so when you need to communicate an issue or new suggestion. Positioning yourself and your team as experts and taking the time to delve into the wider picture—and the goals surrounding your client’s reasons to commission the project in the first place—will make you more valuable to them. Clients and stakeholders will always be different, and sometimes you will get people who are just plain difficult, but more often than not people will listen if you’re willing to talk and explain things. As I’m sure all of us have realised at one point or another, a lot of people think they know what they want, and it’s usually the wrong thing. Managing key stakeholders in your project is arguably your biggest challenge; if they are on your side and feel like the team is genuinely working to give them something of quality and value, then they will make your job easier. It’s often down to you to educate them, and to help them recognise and understand the work involved, and the reasoning behind you and your team’s decisions. Being overly submissive or overly secretive will foster a dynamic in which they feel expected to steer the project. In this situation they may not respect the team’s suggestions or may come up with some unreasonable and counterproductive ideas that are likely to hinder progress and lower morale. Getting the stakeholder on board and making them feel a part of the wider picture will make things easier. Pushing back and challenging ideas, or working hard to justify something they don’t quite understand, will often work in your favour and protect your team. On quite a basic level it also shows you care and are invested; on another, it shows you feel confident in your expertise within your field and that is ultimately the reason they hired you. Taking the time to think about and be aware of this relationship will make it easier to be flexible and handle new ideas or suggestions that pop up as the project goes along. Change doesn’t need to be ‘scope creep’ if it’s raised in a practical, value-orientated, and level-headed discussion. There is usually a way forward for new ideas, as long as they’re valuable and support the wider goals. Maybe the deadline gets pushed back, maybe you get more budget, maybe the client is happy to forgo something else. As long as there’s value and reason, it shows integrity to the project and respect for its success. You can’t expect this to go smoothly without having invested in the client relationship, so it’s a large part of paving the way to handling change well. Reactive Flexibility Finally, if you’ve been doing this for a while, you’ll know by now that you can’t anticipate everything. Sometimes you will have to react and change the plan under circumstances that aren’t easy.
When an unexpected problem first rears its head—a client’s casual afterthought that’s threatening the scope of the project, an internal resource conflict, a junior member of staff that’s not grasping the ropes quite as quickly as you’d hoped—you have to react quickly. In his book, ‘Pitch Anything’, Oren Klaff talks about people’s first reactions being processed by their ‘crocodile brain’ before they’ve had a chance to refine and digest the information more intelligibly. As project managers, product owners, or scrum masters, it’s natural for our immediate reactions to an unexpected problem to cause a pang of stress. But after that initial jolt you need to turn to practical solutions and start racking your brain for different ways forward. It’s here you need to remember not to let your imagination get the better of you, especially if you’ve been putting in the legwork with your team and your client. There is always a way forward, and moments like this can be a good opportunity to develop your negotiation and diplomacy skills. Don’t let your immediate reaction be shutting the problem down; instead, take a second to think about it before you decide on the best direction. In a stressful situation, your first idea probably won’t be your best one. From an internal point of view, it’s very important that whatever went wrong doesn’t turn into a finger-pointing exercise and you don’t lose your cool. Getting caught up in a blame game or a witch hunt is never productive. Relationship cultivating can sometimes be the pillar that gets you through a stressful blip. My biggest tip for staying flexible when you’re reacting to a problem—apart from, obviously, thinking of ways forward—is to communicate. Don’t go quiet until you feel like you have a plan; you’ll often need to put everyone else at ease before you can move things forward. Problem solving is part of the job and will need to happen in even the most flexible of product delivery systems. In conclusion, being flexible is never simple, but there are things you can do to make your life easier. Owning a position of expertise, putting together a team that’s involved in each other’s work and cultivating a client/stakeholder relationship that’s as transparent and respectful as possible will get you a long way. In times of crisis, believe in your skills and be open to adapting over getting frustrated. 2016 Gillian Sibthorpe gilliansibthorpe 2016-12-04T00:00:00+00:00 https://24ways.org/2016/flexible-project-management/ process
301 Stretching Time Time is valuable. It’s a precious commodity that, if we’re not too careful, can slip effortlessly through our fingers. When we think about the resources at our disposal we’re often guilty of forgetting the most valuable resource we have to hand: time. We are all given an allocation of time from the time bank. 86,400 seconds a day to be precise, not a second more, not a second less. It doesn’t matter if we’re rich or we’re poor; no one can buy more time (and no one can save it). We are all, in this regard, equals. We all have the same opportunity to spend our time and use it to maximum effect. As such, we need to use our time wisely. I believe we can ‘stretch’ time, ensuring we make the most of every second and maximising the opportunities that time affords us. Through a combination of ‘Structured Procrastination’ and ‘Focused Finishing’ we can open our eyes to all of the opportunities in the world around us, whilst ensuring that we deliver our best work precisely when it’s required. A win-win, I’m sure you’ll agree. Structured Procrastination I’m a terrible procrastinator. I used to think that was a curse – “Why didn’t I just get started earlier?” – but over time I’ve started to see procrastination as a valuable tool if it is used in a structured manner. Don Norman refers to procrastination as ‘late binding’ (a term I’ve happily hijacked). As he argues, in Why Procrastination Is Good, late binding (delay, or procrastination) offers many benefits: Delaying decisions until the time for action is beneficial… it provides the maximum amount of time to think, plan, and determine alternatives. We live in a world that is constantly changing and evolving; as such, the best time to execute is often ‘just in time’. By delaying decisions until the last possible moment we can arrive at solutions that address the current reality more effectively, resulting in better outcomes. Procrastination isn’t just useful from a project management perspective, however. It can also be useful for allowing your mind the space to wander, make new discoveries and find creative connections. By embracing structured procrastination we can ‘prime the brain’. As James Webb Young argues, in A Technique for Producing Ideas, all ideas are made of other ideas and the more we fill our minds with other stimuli, the greater the number of creative opportunities we can uncover and bring to life. By late binding, and availing of a lack of time pressure, you allow the mind space to breathe, enabling you to uncover elements that are important to the problem you’re working on and, perhaps, discover other elements that will serve you well in future tasks. When setting forth upon the process of writing this article I consciously set aside time to explore. I allowed myself the opportunity to read, taking in new material, safe in the knowledge that what I discovered – if not useful for this article – would serve me well in the future. Ron Burgundy summarises this neatly: Procrastinator? No. I just wait until the last second to do my work because I will be older, therefore wiser. An ‘older, therefore wiser’ mind is a good thing. We’re incredibly fortunate to live in a world where we have a wealth of information at our fingertips. Don’t waste the opportunity to learn; rather, embrace it. Make the most of every second to fill your mind with new material; the rewards will be ample. 
Deadlines are deadlines, however, and deadlines offer us the opportunity to focus our minds, bringing together the pieces of the puzzle we found during our structured procrastination. Like everyone, I’ll hear a tiny but insistent voice in my head that starts to rise when the deadline is approaching. The older you get, the closer to the deadline that voice starts to chirp up. At this point we need to focus. Focused Finishing We live in an age of constant distraction. Smartphones are both a blessing and a curse: they keep us connected, but if we’re not careful the constant connection they provide can interrupt our flow. When a deadline is accelerating towards us it’s important to set aside the distractions and carve out a space where we can work in a clear and focused manner. When it’s time to finish, it’s important to avoid context switching and focus. All those micro-interactions throughout the day – triaging your emails, checking social media and browsing the web – can get in the way of you hitting your deadline. At this point, they’re distractions. Chunking tasks and managing when they’re scheduled can improve your productivity to a surprising degree. At this point it’s important to remove distractions which result in ‘attention residue’, where your mind is unable to focus on the current task due to the mental residue of other, unrelated tasks. By focusing on a single task in a focused manner, it’s possible to minimise the negative impact of attention residue, allowing you to maximise your performance on the task at hand. Cal Newport explores this in his excellent book, Deep Work, which I would highly recommend reading. As he puts it: Efforts to deepen your focus will struggle if you don’t simultaneously wean your mind from a dependence on distraction. To help you focus on finishing, it’s helpful to set up a work-focused environment that is purposefully free from distractions. There’s a time and a place for structured procrastination, but – equally – there’s a time and a place for focused finishing. The French term ‘mise en place’ is drawn from the world of fine cuisine – I discovered it when I was procrastinating – and it’s applicable in this context. The term translates as ‘putting in place’ or ‘everything in its place’ and it refers to the process of getting the workplace ready before cooking. Just like a professional chef organises their utensils and arranges their ingredients, so too can you. Thanks to the magic of multiple user accounts, it’s possible to create a separate user on your computer – without access to email and other social tools – so that you can switch to that account when you need to focus and hit the deadline. Another, less technical way of achieving the same result – depending, of course, upon your line of work – is to close your computer and find some non-digital, unconnected space to work in. The goal is to carve out time to focus so you can finish. As Newport states: If you don’t produce, you won’t thrive – no matter how skilled or talented you are. Procrastination is fine, but only if it’s accompanied by finishing. Create the space to finish and you’ll enjoy the best of both worlds. In closing… There is a time and a place for everything: there is a time to procrastinate, and a time to focus. To truly reap the rewards of time, the mind needs both. 
By combining the processes of ‘Structured Procrastination’ and ‘Focused Finishing’ we can make the most of our 86,400 seconds a day, ensuring we are constantly primed to make new discoveries, but just as importantly, ensuring we hit the all-important deadlines. Make the most of your time; you only get so much. Use every second productively and you’ll be thankful that you did. Don’t waste your time: once it’s gone, it’s gone… and you can never get it back. 2016 Christopher Murphy christophermurphy 2016-12-21T00:00:00+00:00 https://24ways.org/2016/stretching-time/ process
300 Taking Device Orientation for a Spin When The Police sang “Don’t Stand So Close To Me” they weren’t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we’re there, we’re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot. But if you can’t beat them, join them. A brave new world A couple of years back all sorts of specs for new HTML5 APIs sprang up, much to our collective glee. Emboldened, we ran a few tests and found they basically didn’t work in anything and went off disheartened into the corner for a bit of a sob. Turns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. Mostly literally half usable—we’re still talking about browsers, after all. Now, of course they’re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from GitHub that will fall out of fashion while we’re still trying to locate the documentation, right? Well, no! So if we actually wanted to use one of these APIs – say, to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed) – how could we do something like that? Let’s find out. The Device Orientation API The phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn’t work, but also gives us three values that represent the orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of them as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat. The main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too. The API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away: window.addEventListener('deviceorientation', function(e) { console.log(e.alpha); console.log(e.beta); console.log(e.gamma); }); If you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended. One important note Like many of these newfangled APIs, Device Orientation is only available over HTTPS. We’re not allowed to have too much fun without protection, so make sure that you’re working on a secure line. I’ve found that a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel. ngrok http -host-header=rewrite mylocaldevsite.dev:80 ngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option. Right, where were we? Make something to look at It’s all well and good having a bunch of numbers, but they’re no use unless we do something with them. Something creative. Something to inspire the generations. Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we’re not trying to be too clever here). Yeah, let’s just build one of those. 
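Before we lean on that event, a quick word of caution: support isn’t universal, so it’s sensible to feature-detect first. Here’s a minimal sketch (handleOrientation is a made-up name for whatever listener function you write, such as the logging example above):

if (window.DeviceOrientationEvent) {
  // The browser at least knows about the event, so wire up a listener
  window.addEventListener('deviceorientation', handleOrientation);
} else {
  // No orientation data to be had; fall back to something less fun
  console.log('Device Orientation not supported');
}

Note that the event existing doesn’t guarantee a device will ever fire it with useful data (desktop browsers, mostly), so a non-wavy fallback is worth keeping around. Right, on with the viewer.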
Our basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, with its CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the ‘window’ so that the part we want to show is visible. Here it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.) The details of the slider aren’t important (we’re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. This takes a percentage value (0-100) with 0 being far left and 100 being far right (or ‘alt-nazi’ or whatever). var position_image = function(percent) { var pos = (img_W / 100)*percent; img.style.transform = 'translate(-'+pos+'px)'; }; All this does is figure out what that percentage means in terms of the image width, and set the transform: translate(…); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.) Ok. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we’re done. (We’re so not done.) The maths bit If we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you’ll note that the main value that is changing as we swing back and forth is the ‘alpha’ value. That’s the one we want to track. As our goal here is hey, these APIs are interesting and fun and not let’s build the world’s best panoramic image viewer, we’ll start by making a few assumptions and simplifications: When the image loads, we’ll centre the image and take the current nose-forward orientation reading as the middle. Moving left, we’ll track to the left of the image (lower percentage). Moving right, we’ll track to the right (higher percentage). If the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they’re on their own. Nose-forward When the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn’t really matter to us, as we don’t know which direction the user might be facing anyway — we just need to record that initial state and then use it to compare any new readings. var initial_position = null; window.addEventListener('deviceorientation', function(e) { if (initial_position === null) { initial_position = Math.floor(e.alpha); } var current_position = initial_position - Math.floor(e.alpha); }); (I’m rounding down the values with Math.floor() to make debugging easier - we’ll take out the rounding later.) We get our initial position if it’s not yet been set, and then calculate the current position as a difference between the new value and the stored one. These values are weird One thing you need to know about these values is that they range from 0 to 360, but then you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero as you’d expect if you do a forward roll. What I’m interested in is working out my rotation. 
If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can’t see it. var rotation = current_position; if (current_position > 180) rotation = current_position-360; Which way up? Since we’re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait or landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. That’s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn’t well supported. The best I can come up with is: var screen_portrait = false; if (window.innerWidth < window.innerHeight) { screen_portrait = true; } It works. Then we can use screen_portrait to branch our code: if (screen_portrait) { if (current_position > 180) rotation = current_position-360; } else { if (current_position < -180) rotation = 360+current_position; } Here’s the code in action so you can see the values for yourself. If you change screen orientation you’ll need to refresh the page (it’s a demo!). Limiting rotation Now, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360° to view a photo. Instead, we need to limit the range of movement to something like 60°-from-nose in either direction and normalise our values to pan the entire image across that 120° range. -60 would be full-left (0%) and 60 would be full-right (100%). If we set max_rotation = 60, that code ends up looking like this: if (rotation > max_rotation) rotation = max_rotation; if (rotation < (0-max_rotation)) rotation = 0-max_rotation; var percent = Math.floor(((rotation + max_rotation)/(max_rotation*2))*100); We should now be able to get a rotation from -60° to +60° expressed as a percentage. Try it for yourself. The big reveal All that’s left to do is pass that percentage to our image positioning function and would you believe it, it might actually work. position_image(percent); You can see the final result and take it for a spin. Literally. So what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it. What we have made is progress. We’ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn’t have this morning. Something we probably didn’t even want this morning. Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting. But more importantly, we’ve learned that our browsers are just a little bit more capable than we thought. The web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy things that are often dismissed as the preserve of native apps. Like some sort of app marmalade. Poppycock. The web is an amazing, exciting place to create things. All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let’s create things. 2016 Drew McLellan drewmclellan 2016-12-24T00:00:00+00:00 https://24ways.org/2016/taking-device-orientation-for-a-spin/ code
299 What the Heck Is Inclusive Design? Naming things is hard. And I don’t just mean CSS class names and JSON properties. Finding the right term for what we do with the time we spend awake and out of bed turns out to be really hard too. I’ve variously gone by “front-end developer”, “user experience designer”, and “accessibility engineer”, all clumsy and incomplete terms for labeling what I do as an… erm… see, there’s the problem again. It’s tempting to give up entirely on trying to find the right words for things, but this risks summarily dispensing with thousands of years spent trying to qualify the world around us. So here we are again. Recently, I’ve been using the term “inclusive design” and calling myself an “inclusive designer” a lot. I’m not sure where I first heard it or who came up with it, but the terminology feels like a good fit for the kind of stuff I care to do when I’m not at a pub or asleep. This article is about what I think “inclusive design” means and why I think you might like it as an idea. Isn’t ‘inclusive design’ just ‘accessibility’ by another name? No, I don’t think so. But that’s not to say the two concepts aren’t related. Note the ‘design’ part in ‘inclusive design’ — that’s not just there by accident. Inclusive design describes a design activity; a way of designing things. This sets it apart from accessibility — or at least our expectations of what ‘accessibility’ entails. Despite every single accessibility expert I know (and I know a lot) recommending that accessibility should be integrated into design process, it is rarely ever done. Instead, it is relegated to an afterthought, limiting its effect. The term ‘accessibility’ therefore lacks the power to connote design process. It’s not that we haven’t tried to salvage the term, but it’s beginning to look like a lost cause. So maybe let’s use a new term, because new things take new names. People get that. The ‘access’ part of accessibility is also problematic. Before we get ahead of ourselves, I don’t mean access is a problem — access is good, and the more accessible something is the better. I mean it’s not enough by itself. Imagine a website filled with poorly written and lackadaisically organized information, including a bunch of convoluted and confusing functionality. To make this site accessible is to ensure no barriers prevent people from accessing the content. But that doesn’t make the content any better. It just means more people get to suffer it. Whoopdidoo. Access is certainly a prerequisite of inclusion, but accessibility compliance doesn’t get you all the way there. It’s possible to check all the boxes but still be left with an unusable interface. And unusable interfaces are necessarily inaccessible ones. Sure, you can take an unusable interface and make it accessibility compliant, but that only placates stakeholders’ lawyers, not users. Users get little value from it. So where have we got to? Access is important, but inclusion is bigger than access. Inclusive design means making something valuable, not just accessible, to as many people as we can. So inclusive design is kind of accessibility + UX? Closer, but there are some problems with this definition. UX is, you will have already noted, a broad term encompassing activities ranging from conducting research studies to optimizing the perceived affordance of interface elements. But overall, what I take from UX is that it’s the pursuit of making interfaces understandable. 
As it happens, WCAG 2.0 already contains an ‘Understandable’ principle covering provisions such as readability, predictability and feedback. So you might say accessibility — at least as described by WCAG — already covers UX. Unfortunately, the criteria are limited, plus some really important stuff (like readability) is relegated to the AAA level; essentially “bonus points if you get the time (you won’t).” So better to let UX folks take care of this kind of thing. It’s what they do. Except, therein lies a danger. UX professionals don’t tend to be well versed in accessibility, so their ‘solutions’ don’t tend to work for that many people. My friend Billy Gregory coined the term SUX, or “Some UX”: if it doesn’t work for different users, it’s only doing part of the job it should be. SUX won’t do, but it’s not just a disability issue. All sorts of user circumstances go unchecked when you’re shooting straight for what people like, and bypassing what people need: device type, device settings, network quality, location, native language, and available time to name just a few. In short, inclusive design means designing things for people who aren’t you, in your situation. In my experience, mainstream UX isn’t very good at that. By bolting accessibility onto mainstream UX we labor under the misapprehension that most people have a ‘normal’ experience, a few people are exceptions, and that all of the exceptions pertain to disability directly. So inclusive design isn’t really about disability? It is about disability, but not in the same way as accessibility. Accessibility (as it is typically understood, anyway) aims to make sure things work for people with clinically recognized disabilities. Inclusive design aims to make sure things work for people, not forgetting those with clinically recognized disabilities. A subtle, but not so subtle, difference. Let’s go back to discussing readability, because that’s a good example. Now: everyone benefits from readable text; text with concise sentences and widely-understood words. It certainly helps people with cognitive impairments, but it doesn’t hinder folks who have less trouble with comprehension. In fact, they’ll more than likely be thankful for the time saved and the clarity. Readable text covers the whole gamut. It’s — you’ve got it — inclusive. Legibility is another one. A clear, well-balanced typeface makes the reading experience less uncomfortable and frustrating for all concerned, including those who have various forms of visual dyslexia. Again, everyone’s happy — so why even contemplate a squiggly, sketchy typeface? Leave well alone. Contrast too. No one benefits from low contrast; everyone benefits from high contrast. Simple. There’s no more work involved, it just entails better decision making. And that’s what design is really: decision making. How about zoom support? If you let your users pinch zoom on their phones they can compensate for poor eyesight, but they can also increase the touch area of controls, inspect detail in images, and compose better screen shots. Unobtrusively supporting options like zoom makes interfaces much more inclusive at very little cost. And when it comes to the underlying HTML code, you’re in luck: it has already been designed, from the outset, to be inclusive. HTML is a toolkit for inclusion. 
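To make that toolkit idea concrete, here’s a tiny illustrative sketch (the class name and click handler are invented for the example, not taken from any real codebase):

<!-- Inclusive by default: focusable, keyboard-operable, announced as a button -->
<button type="button">Add to basket</button>

<!-- Not inclusive by default: no focus, no keyboard support, no role;
     all of that would have to be scripted and ARIA’d back in by hand -->
<div class="btn" onclick="addToBasket()">Add to basket</div>

The first line gives you inclusion for free; the second is a to-do list.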
Using the right elements for the job doesn’t just mean the few who use screen readers benefit, but keyboard accessibility comes out-of-the-box, you can defer to browser behavior rather than writing additional scripts, the code is easier to read and maintain, and editors can create content that is effortlessly presentable. Wait… are you talking about universal design? Hmmm. Yes, I guess some folks might think of “universal design” and “inclusive design” as synonymous. I just really don’t like the term universal in this context. The thing is, it gives the impression that you should be designing for absolutely everyone in the universe. Though few would adopt a literal interpretation of “universal” in this context, there are enough developers who would deliberately misconstrue the term and decry universal design as an impossible task. I’ve actually had people push back by saying, “what, so I’ve got to make it work for people who are allergic to computers? What about people in comas?” For everyone’s sake, I think the term ‘inclusive’ is less misleading. Of course you can’t make things that everybody can use — it’s okay, that’s not the aim. But with everything that’s possible with web technologies, there’s really no need to exclude people in the vast numbers that we usually are. Accessibility can never be perfect, but by thinking inclusively from planning, through prototyping to production, you can cast a much wider net. That means more and happier users at very little if any more effort. If you like, inclusive design is the means and accessibility is the end — it’s just that you get a lot more than just accessibility along the way. Conclusion That’s inclusive design. Or at least, that’s a definition for a thing I think is a good idea which I identify as inclusive design. I’ll leave you with a few tips. Involve code early Web interfaces are made of code. If you’re not working with code, you’re not working on the interface. That’s not to say there’s anything wrong with sketching or paper prototyping — in fact, I recommend paper prototyping in my book on inclusive design. Just work with code as soon as you can, and think about code even before that. Maintain a pattern library of coded solutions and omit any solutions that don’t adhere to basic accessibility guidelines. Respect conventions Your content should be fresh, inventive, radical. Your interface shouldn’t. Adopt accepted conventions in the appearance, placement and coding of interface elements. Users aren’t there to experience interface design; they’re there to use an interface. In other words: stop showing off (unless, of course, the brief is to experiment with new paradigms in interface design, for an audience of interface design researchers). Don’t be exact “Perfection is the enemy of good”. But the pursuit of perfection isn’t just to be avoided because nothing ever gets finished. Exacting design also makes things inflexible and brittle. If your design depends on elements retaining precise coordinates, they’ll break easily when your users start adjusting font settings or zooming. Choose not to position elements exactly or give them fixed, “magic number” dimensions. Make fewer decisions in the interface so your users can make more decisions for it. Enforce simplicity The virtue of simplicity is difficult to overestimate. The simpler an interface is, the easier it is to use for all kinds of users. Simpler interfaces require less code to make too, so there’s an obvious performance advantage. 
There are many design decisions that require user research, but keeping things simple is always the right thing to do. Not simplified or simple-seeming or simplistic, but simple. Do a little and do it well, for as many people as you can. 2016 Heydon Pickering heydonpickering 2016-12-07T00:00:00+00:00 https://24ways.org/2016/what-the-heck-is-inclusive-design/ process
298 First Steps in VR The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not. I think of these two mindsets as production and prototyping, though of course there is lots of overlap and plenty of phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people. Every year we see articles saying it will be the “year of virtual reality”; that was especially prevalent this year. 2016 has been a year of progress: VR isn’t quite mainstream, but with efforts like PlayStation VR and Google Cardboard we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web. WebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make the heavy lifting a bit easier, there are plenty of resources such as Three.js, A-Frame and ReactVR. Getting Started with A-Frame I like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such; I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components: you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. That’s not to say you can’t build complex things; you have complete access to write JavaScript and shaders. I’m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There’s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it; I play around. It’s fun. For this project I wanted to keep things as simple as possible, so I can easily explain it without the classic “draw a circle then draw an owl”. 
I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy: it must work on Google Cardboard at a minimum, because of price; therefore, it must not rely on having a controller; auto-moving around a maze would be a good example; it should move in the direction you look; stretch goal: scoring, time until you hit a wall or get stuck in the maze; stretch goal: levels, so the map doesn’t need to be random; stretch goal: snow! I decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <title>24 ways</title> <script src="https://aframe.io/releases/0.3.2/aframe.js"></script> <script src="//cdn.rawgit.com/donmccurdy/aframe-extras/v2.6.1/dist/aframe-extras.min.js"></script> </head> <body> <a-scene> <a-entity id="player" camera universal-controls kinematic-body position="0 1.8 0"> </a-entity> <a-entity id="walls"></a-entity> <a-grid id="ground" static-body></a-grid> <a-sky id="sky" color="#AADDF0"></a-sky> <!-- Lighting --> <a-light type="ambient" color="#ccc"></a-light> </a-scene> <script> document.querySelector('a-scene').addEventListener('render-target-loaded', function () { var MAP_SIZE = 10, PLATFORM_SIZE = 5, NUM_PLATFORMS = 50; var platformsEl = document.querySelector('#walls'); var v, box; for (var i = 0; i < NUM_PLATFORMS; i++) { // y: 0 is ground v = { x: (Math.floor(Math.random() * MAP_SIZE) - PLATFORM_SIZE) * PLATFORM_SIZE, y: PLATFORM_SIZE / 2, z: (Math.floor(Math.random() * MAP_SIZE) - PLATFORM_SIZE) * PLATFORM_SIZE }; box = document.createElement('a-box'); platformsEl.appendChild(box); box.setAttribute('color', '#39BB82'); box.setAttribute('width', PLATFORM_SIZE); box.setAttribute('height', PLATFORM_SIZE); box.setAttribute('depth', PLATFORM_SIZE); box.setAttribute('position', v.x + ' ' + v.y + ' ' + v.z); box.setAttribute('static-body', ''); } console.info('Platforms loaded.'); }); </script> </body> </html> As you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an <a-scene> which is the container for everything that is going to show up on the screen. There are a few <a-entity> which can be compared to <div> as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them. Styling Now we’ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around; you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave. Note: you will probably need a server running for images to work. You can do this by running python -m "SimpleHTTPServer" in the folder where the code is, then go to localhost:8000 in your browser. Textures Unless you are going for a cartoony style, you probably want to find some textures. 
I found some on textures.com; one image worked well for the walls and the other for the floor. <a-assets> <img id="texture-floor" src="floor.jpg"> <img id="texture-wall" src="wall.jpg"> </a-assets> The <a-assets> element is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures. To apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image. Replace: <a-grid id="ground" static-body></a-grid> With: <a-grid id="ground" static-body material="src: #texture-floor"></a-grid> This will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined. box.setAttribute('material', 'src: #texture-wall'); That’s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than the texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object. Lighting The next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished. There are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the <a-light> entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit. To start with I wanted to light up the scene with a general light, type="ambient", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4; it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (<a-sky>), as it looked a bit odd with a white sky. <a-light type="ambient" color="#92455E" intensity="0.4"></a-light> <a-sky id="sky" color="#0000ff"></a-sky> I felt that the maze looked good with a red tinge but it was a bit flat: everything was the same colour and it was a bit dark. So I added a light within the #player entity. This could have been an attribute, but I set it as a child a-light instead. By using type="point" with a high intensity and low distance, it showed close walls as being lighter. It also added a sort-of object to the player; it isn’t a walking human or anything, but by moving the light with the player it feels a bit more physical. <a-light color="#fff" distance="5" intensity="0.7" type="point"></a-light> By this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the <a-scene> with type=exponential so it looks thicker the further away it is, and a mid intensity so you feel a bit lost but can still see. I was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers. Movement One of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. 
As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you’re looking, but I wasn’t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working. AFRAME.registerComponent('automove-controls', { init: function () { this.speed = 0.1; this.isMoving = true; this.velocityDelta = new THREE.Vector3(); }, isVelocityActive: function () { return this.isMoving; }, getVelocityDelta: function () { this.velocityDelta.z = this.isMoving ? -this.speed : 0; return this.velocityDelta.clone(); } }); Replace: universal-controls With: universal-controls="movementControls: automove, gamepad, keyboard" This works by creating a component automove-controls that adds auto-move to the player without overriding movement completely. It doesn’t even touch direction; it just checks if isMoving is true and, if so, moves the player by the set speed. Components can be created to add all kinds of functionality with relative ease. This makes A-Frame very powerful for people of all skill levels. Building a map Currently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels. I made the maze in Tiled by finding a random tileset online (we don’t need to actually show the images). I used one tile for the wall and another for the player. Then I exported it as a JavaScript file and modified it in my text editor to get rid of everything I didn’t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it’s easy to update in the future. var map = { "data":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], "height":10, "width":10 } As you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don’s example) to instead loop over the map data and place walls where it is 1 and position the player where the data is 2. I set the position so that the origin of the map would be 0,1.5,0. The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2. 
document.querySelector('a-scene').addEventListener('render-target-loaded', function () { var WALL_SIZE = 5, WALL_HEIGHT = 3; var el = document.querySelector('#walls'); var wall; for (var x = 0; x < map.height; x++) { for (var y = 0; y < map.width; y++) { var i = y*map.width + x; var position = (x-map.width/2)*WALL_SIZE + ' ' + 1.5 + ' ' + (y-map.height/2)*WALL_SIZE; if (map.data[i] === 1) { // Create wall wall = document.createElement('a-box'); el.appendChild(wall); wall.setAttribute('color', '#fff'); wall.setAttribute('material', 'src: #texture-wall;'); wall.setAttribute('width', WALL_SIZE); wall.setAttribute('height', WALL_HEIGHT); wall.setAttribute('depth', WALL_SIZE); wall.setAttribute('position', position); wall.setAttribute('static-body', ''); } if (map.data[i] === 2) { // Set player position document.querySelector('#player').setAttribute('position', position); } } } console.info('Walls added.'); }); With this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There’s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web. It’s Not All Fun And Games A lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that’s just a tiny subset. There are all sorts of applications for VR, including storytelling, data visualisation and even meditation. There have been a number of cases where virtual reality has been shown to help as a tool for therapy: Oxford study finds virtual reality can help treat severe paranoia Virtual Reality Therapy for Phobias at the Duke Faculty Practice Bravemind: Virtual Reality Exposure Therapy at the University of Southern California These are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching and journalism tool. Wrapping Up Ten years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, and that WAP 2.0 included the XHTML Mobile Profile, meaning it would be familiar to web folk. “The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it.” We can look at that and laugh a little; we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the “desktop web” Cameron was referring to. So while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble; we don’t need to learn any new languages. If on the 2026 edition of 24 ways, somebody references this article and looks at how far we have come… well, let’s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well it’s fun… have a go anyway. 
2016 Shane Hudson shanehudson 2016-12-11T00:00:00+00:00 https://24ways.org/2016/first-steps-in-vr/ code
297 Public Speaking with a Buddy My book Demystifying Public Speaking focuses on the variety of fears we each have about giving a talk. From presenting to a client, to leading a team standup, to standing on a conference stage, there are lots of things we can do to prepare ourselves for the spotlight and reduce those fears. Though it didn’t make it into the final draft, I wanted to highlight how helpful it can be to share that public speaking spotlight with another person, or a few more people. If you have fears about not knowing the answer to a question, fumbling your words, or making a mistake in the spotlight, then buddying up may be for you! To some, adding more people to a presentation sounds like a recipe for on-stage disaster. To others, having a friendly face nearby—a partner who can step in if you fumble—is incredibly reassuring. As design director Yesenia Perez-Cruz writes, “While public speaking is a deeply personal activity, you don’t have to go it alone. Nothing has helped my speaking career more than turning it into a group effort.” Co-presenting can level up a talk in two ways: an additional brain and presentation skill set can improve the content of the talk itself, and you may feel safer with the on-stage safety net of your buddy. For example, when I started giving lengthy workshops about building mobile device labs with my co-worker Destiny Montague, we brought different experience to the table. I was able to talk about the user experience of our lab, and the importance of testing across different screen sizes. Destiny spoke about the hardware aspects of the lab, like power consumption and networking. Our audience benefitted from the spectrum of insight we included in the talk. Moreover, Destiny and I kept each other energized and engaging while teaching our audience, having way more fun onstage. Partnering up alleviated the risk (and fear!) of fumbling; where one person makes a mistake, the other person is right there to help. Buddy presentations can be helpful if you fear saying “I don’t know” to a question, as there are other people around you who will be able to help answer it from the stage. By partnering with someone whom I trust and respect, and whose work and knowledge augments my own, it made the experience—and the presentation!—significantly better. Co-presenting won’t work if you don’t trust the person you’re onstage with, or if you don’t have good chemistry working together. It might also not work if there’s an imbalance of responsibilities, both in preparing the talk and giving it. Read on for how to make partner talks work to your advantage! Trustworthiness If you want to explore co-presenting, make sure that your presentation partner is trustworthy and can carry their weight; it can be stressful if you find yourself trying to meet deadlines and prepare well and your partner isn’t being helpful. We’re all about reducing the fears and stress levels surrounding being in that spotlight onstage; make sure that the person you’re relying on isn’t making the process harder. Before you start working together, sketch out the breakdown of work and timeline you’re each committing to. Have a conversation about your preferred work style so you each have a concrete understanding of the best ways to communicate (in what medium, and how often) and how to check in on each other’s progress without micromanaging or worrying about radio silence. 
Ask your buddy how they prefer to receive feedback, and give them your own feedback preferences, so neither of you are surprised or offended when someone’s work style or deliverable needs to be tweaked. This should be a partnership in which you both feel supported; it’s healthy to set all these expectations up front, and create a space in which you can each tweak things as the work progresses. Talk flow and responsibilities There are a few different ways to organize the structure of your talk with multiple presenters. Start by thinking about the breakdown of the talk content—are there discrete parts you and the other presenters can own or deliver? Or does it feel more appropriate to deliver the entirety of the content together? If you’re finding that you can break down the content into discrete chunks, figure out who should own which pieces, and what ownership means. Will you develop the content together but have only one person present the information? Or will one person research and prepare each content section in addition to delivering it solo onstage? Rehearse how handoffs will go between sections so it feels natural, rather than stilted. I like breaking a presentation into “chapters” when I’m passionate about particular aspects of a topic and can speak on those, but know that there are other aspects to be shared and there’s someone else who can handle (and enjoy!) talking about them. When Destiny and I rehearsed our “chapter” handoffs, we developed little jingles that we’d both sing together onstage; it indicated to the audience that it was a planned transition in the content, and tied our independent work together into a partnership. Alternatively, you can give the presentation in a way that’s close to having a rehearsed conversation, rather than independently presenting discrete parts of the talk. In this case, you’ll both be sharing the spotlight at the same time, throughout the duration of the talk. Preparation is key, here, to make sure that you each understand what needs to be communicated, and you have a sense of who will be taking responsibility for communicating those different pieces of information. A poorly-prepared talk like this will look like the co-presenters are talking over each other, or hesitating awkwardly to give the other person more room to speak; the audience will feel how uncomfortable this is, and will probably be distracted from the talk content. Practice the talk the whole way through multiple times so you know what each person is planning on covering and how you want to interact with each other while you’re both holding microphones; also figure out how you’ll be standing in relation to each other. More on that next! Sharing the stage If you choose to give a talk with a partner, determine ahead of time how you’ll stand (or sit). For example, if you each take “chapters” or major sections of the presentation, ensure that it’s clear who the audience should focus their attention on. You could sit in a chair off to the side (or stand). I recommend placing yourself far enough away that you’re not distracting to the audience; you don’t want them watching you while your partner is speaking. 
If the audience can still see you, but their focus should be on your buddy, be sure to not look distracted; keep your eyes on your buddy, and don’t just open your laptop and ignore what’s happening! Feel free to smile, laugh, or react how the audience should be reacting as your partner is speaking. If you’re both sharing the spotlight at the same time and having a rehearsed conversation, make sure that your body language engages the audience and you’re not just speaking to each other, ignoring the folks watching. Watch this talk with Guy Podjarny and Assaf Hefetz who have partnered up to talk about security; they have clearly identified roles onstage, and remain engaged with the audience. Consider whether or not you will share a microphone, or if you will both be mic’d. (Be sure that the event organizer, or the A/V team, has a heads-up well in advance to ensure they have the equipment handy!) Also talk through how you’d like to handle Q&A time during or after the talk, especially if you have clear “chapters” where Q&A might happen naturally during a handoff. The more clarity you and your partner have about who is responsible for which pieces of information sharing, the more you can feel and appear prepared. Co-presenting does take a lot of preparation and requires a ton of communication between you and your partner. But the rewards can be awesome: double the brains onstage to help answer questions and communicate information, and a friendly face to help comfort you if you feel nervous. 2016 Lara Hogan larahogan 2016-12-06T00:00:00+00:00 https://24ways.org/2016/public-speaking-with-a-buddy/ process
296 Animation in Design Systems Our modern front-end workflow has matured over time to include design systems and component libraries that help us stay organized, improve workflows, and simplify maintenance. These systems, when executed well, ensure proper documentation of the code available and enable our systems to scale with reduced communication conflicts. But while most of these systems take a critical stance on fonts, colors, and general building blocks, their treatment of animation remains disorganized and ad-hoc. Let’s leverage existing structures and workflows to reduce friction when it comes to animation and create cohesive and performant user experiences. Understand the importance of animation Part of the reason we treat animation like a second-class citizen is that we don’t really consider its power. When users are scanning a website (or any environment or photo), they are attempting to build a spatial map of their surroundings. During this process, nothing quite commands attention like something in motion. We are biologically trained to notice motion: evolutionarily speaking, our survival depends on it. For this reason, animation when done well can guide your users. It can aid and reinforce these maps, and give us a sense that we understand the UX more deeply. We retrieve information and put it back where it came from instead of something popping in and out of place. “Where did that menu go? Oh it’s in there.” For a deeper dive into how animation can connect disparate states, I wrote about the Importance of Context-Shifting in UX Patterns for CSS-Tricks. An animation flow on mobile. Animation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn’t anything very fancy or crazy. Just by showing users that they were cared about, the site kept them around, and the bounce rates dropped. 14 second generic loading screen. 22 second custom loading screen. This also works for form submission. Giving your personal information over to an online process like a static form can be a bit harrowing. It becomes more harrowing without animation used as a signal that something is happening, and that some process is completing. That same animation can also entertain users and make them feel as though the wait isn’t as long. Eli Fitch gave a talk at CSS Dev Conf called: “Perceived Performance: The Only Kind That Really Matters”, which is one of my favorite talk titles of all time. In it, he discussed how we tend to measure things like timelines and network requests because they are more quantifiable–and therefore easier to measure–but that measuring how a user feels when visiting the site is more important and worth the time and attention. In his talk, he states “Humans over-estimate passive waits by 36%, per Richard Larson of MIT”. This means that if you’re not using animation to shorten the perceived wait of a form submission, users will perceive it to be much slower than the dev tools timeline is recording. Rein it in Unlike fonts, colors, and so on, we tend to add animation in as a last step, which leads to disorganized implementations that lack overall cohesion. If you asked a designer or developer if they would create a mockup or build a UI without knowing the fonts they were working with, they would dislike the idea. 
Not knowing the building blocks they’re working with means that the design can fall apart or the development can break with something so fundamental left out at the start. Good animation works the same way. The first step in reining in your use of animation is to perform an animation audit. Look at all the places you are using animation on your site, or the places you aren’t using animation but probably should. (Hint: perceived performance of a loader on a form submission can dramatically change your bounce rates.) Not sure how to perform a good audit? Val Head has a great chapter on it in her book, Designing Interface Animation, which has buckets of research and great ideas. Even some beautiful component libraries that have animation in the docs make this mistake: you don’t need every kind of animation, just like you don’t need every kind of font. This bloats our code. Ask yourself questions like: do you really need a 180-degree flip animation? I can’t even conceive of a place on a typical UI where that would be useful, yet most component libraries that I’ve seen have a mixin that does just this. Which leads to… Have an opinion Many people are confused about Material Design. They think that Material Design is Motion Design, mostly because they’ve never seen anyone take a stance on animation before and document these opinions well. But every time you use Material Design as your motion design language, people look at your site and think GOOGLE. Now that’s good branding. By using Google’s motion design language and not your own, you’re losing out on a chance to be memorable on your own website. What does having an opinion on motion look like in practice? It could mean you’ve decided that you never flip things. It could mean that your eases are always going to glide. In that instance, you would put your efforts towards finding an ease that looks “gliding” and pulling out any transform: scaleX(-1) animation you find on your site. Across teams, everyone knows not to spend time mocking up flipping animation (even if they’re working on an entirely different codebase), and to instead work on something that feels like it glides. You save time and don’t have to communicate again and again to make things feel cohesive. Create good developer resources Sometimes people don’t incorporate animation into a design system because they aren’t sure how, beyond the base hover states. All animation properties can be broken into interchangeable pieces. This allows developers and designers alike to mix and match and iterate quickly, while still staying in the correct language. Here are some recommendations (with code and a demo to follow): Create timing units, similar to h1, h2, h3. In a system I worked on recently, I called these t1, t2, t3. T1 would be reserved for longer pieces, down to t5, which is a bit like h5 in that it’s the default (usually around .25 seconds or thereabouts). Keep animation easings for entrance, exit, entrance emphasis and exit emphasis that people can commonly refer to. These, along with animation-fill-mode, are likely to be the only properties reused across the entrance and exit of an animation. Use the animation-name property to define the keyframes for the animation itself. I would recommend starting with 5 or 6 before making a slew of them, and see if you need more. Writing 30 different animations might seem like a nice resource, but just as with your color palette, having too many can unnecessarily bulk up your codebase and keep it from feeling cohesive.
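To make that concrete, here’s a minimal sketch of what those interchangeable pieces might look like as a shared JavaScript module (the t1–t5 naming comes from the system described above; the durations and easing curves themselves are invented for illustration):

// animation-tokens.js: a single source of truth for timing and easing.
// All values below are placeholders; tune them to your own brand of motion.
export const timing = {
  t1: '0.7s',  // reserved for longer, larger movements
  t2: '0.5s',
  t3: '0.4s',
  t4: '0.3s',
  t5: '0.25s', // the default, a bit like h5
};

export const easing = {
  entrance: 'cubic-bezier(0, 0, 0.2, 1)',
  exit: 'cubic-bezier(0.4, 0, 1, 1)',
  entranceEmphasis: 'cubic-bezier(0.2, 0.9, 0.3, 1.3)',
  exitEmphasis: 'cubic-bezier(0.5, -0.3, 0.8, 0.2)',
};

// Usage: element.style.transition = `opacity ${timing.t5} ${easing.entrance}`;

Because every component pulls from the same small set, changing the feel of motion across the whole system becomes a one-file edit.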
Think critically about what you need here. See the Pen Modularized Animation for Component Libraries by Sarah Drasner (@sdras) on CodePen. The example above is pared-down, but you can see how, in a robust system, having interchangeable pieces cached across the whole system would save time for iterations and prototyping, not to mention make it easy to adjust the same animation for a different feeling of movement. One low-hanging fruit might be a loader that leads to a success dialog. On a big site, you might have that pattern many times, so writing up a component that does only that helps you move faster while also allowing you to really zoom in and focus on that pattern. You avoid throwing something together at the last minute, or using a GIF, which is really heavy and mushy on retina screens. You can make singular pieces that look really refined and are reusable. React and Vue implementations are great for reusable components, as you can create a building block with a common animation pattern, and once created, it can be a resource for all. Remember to take advantage of things like props to allow for timing and easing adjustments, as we did in the previous example! Responsive At the very least we should ensure that interaction also works well on mobile, but if we’d like to create interactions that take advantage of all of the gestures mobile has to offer, we can use libraries like zingtouch or hammer to work with swipe or multi-finger detection. With a bit of work, these can all be created through native detection as well. Responsive web pages can specify initial-scale=1.0 in the meta tag so that the device is not waiting the default 300ms after a tap (checking for a double-tap) before firing the action. Interaction for touch events must either start from a larger touch target (40px × 40px or greater) or use @media(pointer:coarse) as support allows. Buy-in Sometimes people don’t create animation resources simply because it gets deprioritized. But design systems were also something we once had to fight for. This year at CSS Dev Conf, Rachel Nabors demonstrated how to plot out animation wants vs. needs on a graph (reproduced with her permission) to help prioritize them: This helps the people you’re working with figure out the relative necessity and workload of these animations and think more critically about them. You’re also more likely to get something through if you’re proving that what you’re making is needed and can be reused. Good compromises can be made this way: “we’re not going to go all out and create an animated ‘About Us’ page like you wanted, but I suppose we can let our users know their contact email went through with a small progress and success notification.” Successfully pushing smaller projects through helps build trust with your team, and lets them see what this type of collaboration can look like. This builds up the type of relationship necessary to push through projects that are more involved. It can’t be overstressed that good communication is key. Get started! With these tools and good communication, we can make our codebases more efficient and performant, and make our sites feel better for our users. We can enhance the user experience on our sites, and create great resources for our teams to allow them to move more quickly while innovating beautifully. 2016 Sarah Drasner sarahdrasner 2016-12-16T00:00:00+00:00 https://24ways.org/2016/animation-in-design-systems/ code
295 Internet of Stranger Things This year I’ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I’m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension[1]. What you’ll see You can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact, why not try it now: check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it; hit the “Send a message” button and wait your turn.) It’s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I’m using WebSockets to communicate in real time between the browser, server and Raspberry Pi. What you’ll need Raspberry Pi, any of the following models: Zero (will need straight male header pins soldered[2] and a Micro USB OTG adaptor), A+, B+, 2, or 3 Micro SD card, at least 4Gb, Class 10 speed[3] Micro USB power supply, at least 2A USB wifi dongle (unless you have a Pi 3 - that has wifi built in) Addressable fairy lights Logic level shifter (with pins soldered unless you want to do it!) Breadboard Jumper wires (3x male to male and 4x female to male) Optional but recommended: Base board to hold the Pi and breadboard (often comes with a breadboard!) You can find links for where to buy all of these items in the support document that goes along with this tutorial. The total price should be around $100[4]. Setting up the Raspberry Pi You’ll need to prepare the SD card for the Raspberry Pi. You’ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher[5]. Next up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There’s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details[6]. network={ ssid="mywifi" psk="mypassword" } Save the file, eject the card from your computer and plug it into the Raspberry Pi. If you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle[7] and power supply, but don’t plug it in yet! Wiring! Time to wire everything up! First of all, push the logic level converter into the middle of the breadboard: Logic Level Converter The logic level converter may be labelled differently from the one in the diagram, but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top. Raspberry Pi pins only output 3.3v but the lights need 5v. That’s why we need the logic level converter in there to boost up the signal. Connect the first two wires between the Raspberry Pi pins and the breadboard: Note that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don’t have to match but it’s easier to follow (and check) if you use the same ones as in the diagram.
Then the next two: This is what you should have so far: Lights Now to connect the lights! My string has a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard. Make sure that you connect the right end of the lights; mine has a male connector at the wrong end, so it’s impossible to get wrong, but double check yours. Also make sure that the holes in the light connector are wired the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or ‘-’ or G) and DI (sometimes DIN, for data in). You can just about make out the +, DI and GND in this picture. Note that on the other side of the board there is a DO, for data out - that’s what takes the data along to the chip in the next light. Make sure that you’re plugging into the data-in and not the data-out! That’s it! Everything’s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they’re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check! The Moment of Truth! Plug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that’s your confirmation that it’s connected to my WebSocket server and ready to receive messages from the upside-down! However, if the first light in the string is pulsing red, it means that you’re not connected to the internet. So check the Troubleshooting section of the support document. If it’s pulsing green then you’re connected to the internet but can’t connect to my server. It must have gone down. Sorry! The code will keep trying, so leave it running and maybe it’ll come back up. Rig up the lights! Fix the lights up on the wall however you want: pins, nails, tape. I’ve used cable clips. Just be careful! I’m using a 50-light string, so I’ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor, where I can keep the Raspberry Pi. Check the photo here to see how the lights line up; note that there are spare unused lights in-between each row: Now visit lights.seb.ly and you’ll see this: If you’re the only one online you’ll have a direct connection to the lights, and any letter you click on will light up both in the browser and in real life. If there are other people there, you’ll need to click the button to join the queue and wait your turn. How it works - the geeky details! Electronics: The pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them: LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer). Addressable LEDs or “Neopixels” We’re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string.
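To give a flavour of what driving these from Node.js looks like, here’s a hedged sketch using rpi-ws281x-native, one common npm package for WS281x LEDs (the package choice and all details here are illustrative, not necessarily what this project uses):

// Drive a string of 50 WS281x LEDs from Node.js on a Raspberry Pi.
var ws281x = require('rpi-ws281x-native');

var NUM_LEDS = 50;
var pixelData = new Uint32Array(NUM_LEDS);

ws281x.init(NUM_LEDS);

// Set the first light to red; colours are packed as 0xRRGGBB.
pixelData[0] = 0xff0000;
ws281x.render(pixelData);

// Reset the hardware on exit so the lights don't stay frozen.
process.on('SIGINT', function () {
  ws281x.reset();
  process.exit(0);
});

The library handles the precisely timed signalling described above; your code just fills an array of colour values and calls render().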
The chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels[8] and I used them on my Laser Light Synths project. Neopixels with the chip and the LED all in one - it’s the white square shaped component, and the darker square inside is the chip. These are only 5mm wide! A Laser Light Synth! Covered with around 800 super bright neopixels! Logic Level Converter The logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip. Power Neopixels can often draw a lot of current, so you need to be careful how you power them. I’ve measured the current draw from the string to be less than 800mA, so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you’ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit’s NeoPixel Überguide. Node.js There are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they’re hosted on npm as stranger-lights and stranger-lights-server. The server-side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time. WebSockets I’m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connect to my WebSocket server. When you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also to all the web browsers[9]. The Raspberry Pi code The Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file: node /home/pi/strangerthings/client.js > /dev/null & Anything in the rc.local file gets executed when the Pi boots up, and this line of code runs the Node.js app and routes its output to nowhere (ie /dev/null). The & means that it runs in the background and doesn’t hold up the boot process. Working with the Raspberry Pi headless You might know that when a computer has no screen or keyboard, you would refer to it as “running headless”. So just like most web servers, you need to configure it over the network with ssh[10]. If you’re on a Mac you can find your Pi on the network through the name raspberrypi.local[11], otherwise you’ll need to find its IP address. There’s more in the Remote Access guide on the Raspberry Pi website. And if you’re very new to the terminal, I highly recommend this great online Linux command line tutorial. Improvements This is quite an early experiment and I’m sure I’ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I’ve just rigged up my lights with Post-it notes. It’d be a lot nicer to get a paintbrush and try to recreate the Winona-in-a-manic-state text style. Where next?
Finding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that’s something you’d be interested in, or sign up to my mailing list at st4i.com. There are many, many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you’ll be spoilt for choice! Also take a look at Arduino - it’s an incredibly popular platform for electronics and the internet is literally bursting with resources. I hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly. [1] Not a particularly original idea, but I don’t think I’ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables ↩︎ [2] Video guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit ↩︎ [3] Slower cards will work but performance may suffer ↩︎ [4] Or £5,000 in UK money. Sorry, Brexit joke :) ↩︎ [5] You will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. ↩︎ [6] SSID and password should be all that you need but you can see all the config options on this wpa supplicant guide ↩︎ [7] Raspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle ↩︎ [8] Thanks to Adafruit who invented the term neopixels so we don’t have to refer to them as WS281x any more! ↩︎ [9] So you can see other people sending messages in the browser ↩︎ [10] ssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal. ↩︎ [11] You can change this default hostname using raspi-config ↩︎ 2016 Seb Lee-Delisle sebleedelisle 2016-12-01T00:00:00+00:00 https://24ways.org/2016/internet-of-stranger-things/ code
294 New Tricks for an Old Dog Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy. I’ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn’t know or exposes an area of uncertainty that you can go work on. In this article, I hope to share some experiences and techniques that will prove useful in your own situation and that will let you impress your friends in some new and exciting ways! How do you do, fellow kids? To start with I’d like to introduce you to our JavaScript framework, Flight. Right now it’s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React. Over time, as we used Flight for more complex interfaces, we found it wasn’t scaling with us. Composing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn’t come built in to Flight, and it made reusing components a real challenge. There was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn’t predict how a component would be built until you opened it. Making matters worse, Flight relied on events to move data around the application. Unfortunately, events aren’t good for giving structure to complex logic. They jump around in a way that’s hard to understand and debug, and force you to search your code for a specific string — the event name — to figure out what’s going on (there’s a small sketch of this problem at the end of this section). To find fixes for these problems, we looked around at other frameworks. We like React for its simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone. But when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you’ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort! Instead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we already were using. Boiled down, what we liked seemed quite simple: Component nesting & composition Easy, predictable state management Normal functions for data manipulation Making these ideas applicable to Flight took some time, but we’re in a much better place now. Through persistent trial and error, we have well-documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app. While the specifics of our situation and Flight aren’t really important, this experience taught me something: distill good tech into great ideas. You can apply great ideas anywhere. You don’t have to use the cool kids’ latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already?
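As a tiny illustration of that event problem, here’s a hedged sketch that uses Node’s built-in EventEmitter to stand in for Flight’s event system (Flight’s real API differs, and the event name is invented):

// The only link between these two pieces of logic is the string
// 'itemSelected'; tracing the flow means grepping for that string.
const EventEmitter = require('events');
const bus = new EventEmitter();

// Consumer, typically defined in a different module entirely:
bus.on('itemSelected', function (item) {
  console.log('selected item', item.id);
});

// Producer: fires the event and hopes the right things are listening.
bus.emit('itemSelected', { id: 42 });

Compare that with a plain function call, which your editor can jump straight to the definition of; that difference is a big part of what made the codebase hard to debug.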
Times, they are a changin’ Apart from stealing ideas from the new and shiny, how can we keep making the most of improved tooling and techniques? Times change and so should the way we write code. Going back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out on new language features, and without a more advanced build tool like Webpack, every module’s source was encased in AMD boilerplate. In fact, we found ourselves with a mix of both AMD syntaxes: define(["lodash"], function (_) { // . . . }); define(function (require) { var _ = require("lodash"); // . . . }); This just wouldn’t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax: import _ from "lodash"; These days we’re using Babel, Webpack, ES2015 modules and many new language features that make development just… better. But how did we get there? To explain, I want to introduce you to codemods and jscodeshift. A codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL("...") to new URL("..."). jscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file’s syntax tree – a data-structure representation of the code – finding and modifying it in place as it goes. Here’s an example that renames all instances of the variable foo to bar: module.exports = function (fileInfo, api) { return api .jscodeshift(fileInfo.source) .findVariableDeclarators('foo') .renameTo('bar') .toSource(); }; It’s a seriously powerful tool, and we’ve used it to write a series of codemods that: rename modules, unify our use of AMD to a single syntax, transition from one testing framework to another, and switch from AMD to CommonJS. These changes can be pretty huge and far-reaching. Here’s an example commit from when we switched to CommonJS: commit 8f75de8fd4c702115c7bf58febba1afa96ae52fc Date: Tue Jul 12 2016 Run AMD -> CommonJS codemod 418 files changed, 47550 insertions(+), 48468 deletions(-) Yep, that’s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone! From this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes: Find all the existing patterns Choose the two most similar Unify with a codemod Repeat. For example: For module loading, we had 2 competing AMD patterns plus some use of CommonJS The two AMD syntaxes were the most similar We used a codemod to unify the AMD patterns Later we returned to AMD to convert it to CommonJS It’s worked for us, and if you’d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer. Welcome aboard! As TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad of microservices that manage our data, with their layers of authentication, security and business logic, make for an overwhelming amount of information to hand to a newbie. Inspired by Amy’s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design the onboarding that each of our new hires goes through, to make the most of their first few weeks. Joining a new company, team, or both, is stressful and uncomfortable.
Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding! And as you build up an onboarding process, you’ll create things that are useful for more than just new hires; it’ll force you to write documentation, for example, in a way that’s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help. This is something that’s taken for granted in open source, but somehow I think we forget about it in big companies. After all, better documentation is just a good thing. You will forget things from time to time, and you’d be surprised how often the “beginner” docs help! For TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts: What are our dependencies? Where are the potential points of failure? Where does authentication live? Storage? Caching? Who owns “X”? Of course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track. To address this, we’ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review. In my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team’s work. But, if you’re not doing it, here’s my suggestion for getting started: every pull request gets a +1 from someone else. That’s all — it’s very lightweight and easy. Just ask someone to have a quick look over your code before it goes into master. At Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things: Don’t review for more than an hour [1] Keep reviews smaller than ~400 lines [2] Code review your own code first [2] After an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone’s put code up for review, that review is blocking them from doing other work. It’s your job to unblock them. On TweetDeck, we actually try to keep reviews under 250 lines. It doesn’t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration. But the most important thing I’ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team’s: with fresh, critical eyes, after a break, using a dedicated code review tool. It’s amazing what you can spot when you put a new interface around code you’ve been staring at for hours! And yes, this list features science. The data backs up these conclusions, and if you’d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It’s ace.
For more dedicated information sharing, we’ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team member shares or teaches something to everyone else, and next time it’s someone else’s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results. If you’d like to run a seminar, one thing you could try to get started: run a “point at the thing you least understand in our architecture” session — thanks to James for this idea. And guess what… your onboarding architecture diagrams will help (and benefit from) this! More, please! There are a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations. And finally, thanks to Amy for proofreading this and to Passy for feedback on the original talk. [1] Dunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Proceedings of the 22nd ICSE: 467-476. ↩ [2] Cohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Beverly, MA: SmartBear Software. ↩ ↩ 2016 Tom Ashworth tomashworth 2016-12-18T00:00:00+00:00 https://24ways.org/2016/new-tricks-for-an-old-dog/ code
293 A Favor for Your Future Self We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens. We comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front, just in case, to give us a bit of safety later. I want to talk about a situation that Past Alicia and Team couldn’t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. It wasn’t a very pleasant experience, not only because of why we had to remove so much of what we had built, but also because it’s the ultimate “I really hope this doesn’t break something else” situation. It was a stressful and tedious effort of triple-checking that the things we were removing weren’t dependencies elsewhere. To be honest, we wouldn’t have been able to do this with any amount of success or confidence without our test suite. Writing tests for code is one of those things that developers really, really don’t want to do. It’s one of the easiest things to cut in the development process, and there’s often a struggle to get developers to start writing tests in the first place. One of the best lessons the web has taught us is that we can’t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren’t in a tough spot later on because we only thought of the best-case scenarios. JavaScript Regardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you’re choosing to build a JavaScript-heavy app, you absolutely should be writing some combination of unit and integration tests. Unit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. Great for reused functions and small, scoped areas, this is the closest you examine your code with the testing microscope. For example, if we were to build a calculator, the most minute piece we could test could be the basic operations. /* * This example uses a test framework called Jasmine */ describe("Calculator Operations", function () { it("Should add two numbers", function () { // Say we have a calculator Calculator.init(); // We can run the function that does our addition calculation... var result = Calculator.addNumbers(7,3); // ...and ensure we're getting the right output expect(result).toBe(10); }); }); Even though these teeny bits work in isolation, we should ensure that connecting the larger pieces works as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let’s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn’t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.).
it("Should remember the last calculation", function () { // Run an operation Calculator.addNumbers(7,10); // Expect something else to have happened as a result expect(Calculator.updateCurrentValue).toHaveBeenCalled(); expect(Calculator.currentValue).toBe(17); }); Unit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although it still might happen, you’ll be able to catch problems way sooner than without a test suite, and hopefully never push those failures to your production environment. Interfaces Regardless of how you’re building something, it most definitely has some kind of interface. Whether you’re using a very barebones structure, or you’re leveraging a whole design system, these things can be tested as well. Acceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. These are not necessarily for simulating edge-case scenarios, but rather for ensuring that our core offerings are stable. For example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress. /* * This example uses Jasmine along with an add-on called jasmine-integration */ describe("Acceptance tests", function () { // Go to our signup page var page = visit("/signup"); // Fill our signup form with invalid information page.fill_in("input[name='email']", "Not An Email"); page.fill_in("input[name='name']", "Alicia"); page.click("button[type=submit]"); // Check that we get an expected error message it("Shouldn't allow signup with invalid information", function () { expect(page.find("#signupError").hasClass("hidden")).toBeFalsy(); }); // Now, fill our signup form with valid information page.fill_in("input[name='email']", "thisismyemail@gmail.com"); page.fill_in("input[name='name']", "Gerry"); page.click("button[type=submit]"); // Check that we get an expected success message and the error message is hidden it("Should allow signup with valid information", function () { expect(page.find("#signupError").hasClass("hidden")).toBeTruthy(); expect(page.find("#thankYouMessage").hasClass("hidden")).toBeFalsy(); }); }); In terms of visual design, we’re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga. Tools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images.
The code would look something like this: /* * This example uses PhantomCSS */ casper.start("/home").then(function(){ // Initial state of form phantomcss.screenshot("#signUpForm", "sign up form"); // Hit the sign up button (should trigger error) casper.click("button#signUp"); // Take a screenshot of the UI component phantomcss.screenshot("#signUpForm", "sign up form error"); // Fill in form by name attributes & submit casper.fill("#signUpForm", { name: "Alicia Sedlock", email: "alicia@example.com" }, true); // Take a second screenshot of success state phantomcss.screenshot("#signUpForm", "sign up form success"); }); You run this code before starting any development, to create your baseline set of screen captures. After you’ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. Say you changed the margins on your form elements – your image diff would look something like this: This is a great tool for ensuring not just that your site retains its expected styling, but also that nothing accidentally changes in the living style guide or modular components you may have developed. It’s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored. Conclusion The shape and size of what you’re testing for your site or app will vary. You may not need lots of unit or integration tests if you don’t write a lot of JavaScript. You may not need visual regression testing for a one-page site. It’s important to assess your codebase to see which tests would provide the most benefit for you and your team. Writing tests isn’t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that’s broken breaks trust with our users, and it’s our responsibility as developers to make sure that trust isn’t broken. Testing shouldn’t be considered a “nice to have” - it should be an integral piece of our workflow and our day-to-day job. 2016 Alicia Sedlock aliciasedlock 2016-12-03T00:00:00+00:00 https://24ways.org/2016/a-favor-for-your-future-self/ code
292 Watch Your Language! I’m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can’t be expressed in some languages, while other languages express these concepts with ease. It also helped me understand the way we label languages. English: business. French: romance. Here’s an example of how words, or the absence thereof, can affect the way we think: In French we love everything. There’s no straightforward way to say we like something, so we just end up loving everything. I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It’s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn’t grasped the difference. Needless to say, I’ve scared away plenty of first dates! Learning another language made me realize the limitations of my native language and revealed concepts I didn’t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas. When I lived in Montréal, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient. I’m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. In the past couple of years I have been lucky enough to write code in these languages at a massive scale. In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job. When I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn’t consider accessibility. All this made editing views a laborious process. Grep. Replace all. Test. Ship it. Repeat. This wasn’t sustainable at Shopify’s scale, so the newly-formed front end team was given two missions: Make the app responsive (AKA Let’s Make This Thing Responsive ASAP) Make the view layer scalable and maintainable (AKA Let’s Build a Pattern Library… in Ruby) Let’s make this thing responsive ASAP The year was 2015. The Shopify admin wasn’t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries. It seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn’t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool. 
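The article doesn’t include Shopify’s implementation, but to make the idea concrete, a JavaScript container query boils down to something like this hypothetical sketch (the class names, attribute hook and breakpoint are all invented):

// Each component checks its own rendered width, not the viewport's,
// and toggles layout classes to match.
function applyContainerQuery(el) {
  var width = el.offsetWidth;
  el.classList.toggle('card--narrow', width < 480);
  el.classList.toggle('card--wide', width >= 480);
}

function applyAll() {
  var components = document.querySelectorAll('[data-container-query]');
  Array.prototype.forEach.call(components, applyContainerQuery);
}

// Layout classes only arrive after JavaScript runs, on load and on
// every resize - which is why components jumped around at first paint.
window.addEventListener('resize', applyAll);
document.addEventListener('DOMContentLoaded', applyAll);

The sketch also hints at the performance trap: every component has to be measured and re-classed in JavaScript before it can settle into its final layout.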
Writing the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple weeks to push this to production and make our app mostly responsive. But with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn’t performant at all. Components would jarringly jump around the page before settling in on first paint. It was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn’t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout. In other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly. Let’s build a pattern library… in Ruby In order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping. When I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. Because of this baggage, the only way for us to build a pattern library that would get buy-in from our developers was to use Rails view helpers. And for many reasons using Ruby was the right choice for us. The time spent ramping developers up on the new UI components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn’t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan. We put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front-end patterns, we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still does)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks. Since the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn’t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn’t going to cut it. I don’t know the end of this story, because we haven’t written it yet. We’ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven’t determined the winner yet. Only time (and data gathering) will tell. Ruby is great but it doesn’t speak the browser’s language efficiently. It was not the right language for the job. Learning a new spoken language has had an impact on how I write code.
It has taught me that you don’t know what you don’t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. But if you still feel like you’re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language. 2016 Annie-Claude Côté annieclaudecote 2016-12-10T00:00:00+00:00 https://24ways.org/2016/watch-your-language/ code
291 Information Literacy Is a Design Problem Information literacy, wrote Dr. Carol Kuhlthau in her 1987 paper “Information Skills for an Information Society,” is “the ability to read and to use information essential for everyday life”—that is, to effectively navigate a world built on “complex masses of information generated by computers and mass media.” Nearly thirty years later, those “complex masses of information” have only grown wilder, thornier, and more constant. We call the internet a firehose, yet we’re loath to turn it off (or even down). The amount of information we consume daily is staggering—and yet our ability to fully understand it all remains frustratingly insufficient. This should hit a very particular chord for those of us working on the web. We may be developers, designers, or strategists—we may not always be responsible for the words themselves—but we all know that communication is much more than just words. From fonts to form fields, every design decision that we make changes the way information is perceived—for better or for worse. What’s more, the design decisions that we make feed into larger patterns. They don’t just affect the perception of a single piece of information on a single site; they start to shape reader expectations of information anywhere. Users develop cumulative mental models of how websites should be: where to find a search bar, where to look for contact information, how to filter a product list. And yet: our models fail us. Fundamentally, we’re not good at parsing information, and that’s troubling. Our experience of an “information society” may have evolved, but the skills Dr. Kuhlthau spoke of are even more critical now: our lives depend on information literacy. Patterns from words Let’s start at the beginning: with the words. Our choice of words can drastically alter a message, from its emotional resonance to its context to its literal meaning. Sometimes we can use word choice for good, to reinvigorate old, forgotten, or unfairly besmirched ideas. One time at a wedding bbq we labeled the coleslaw BRASSICA MIXTA so people wouldn’t skip it based on false hatred.— Eileen Webb (@webmeadow) November 27, 2016 We can also use clever word choice to build euphemisms, to name sensitive or intimate concepts without conjuring their full details. This trick gifts us with language like “the beast with two backs” (thanks, Shakespeare!) and “surfing the crimson wave” (thanks, Cher Horowitz!). But when we grapple with more serious concepts—war, death, human rights—this habit of declawing our language gets dangerous. Using more discreet wording serves to nullify the concepts themselves, euphemizing them out of sight and out of mind. The result? Politicians never lie, they just “misspeak.” Nobody’s racist, but plenty of people are “economically anxious.” Nazis have rebranded as “alt-right.” I’m not an asshole, I’m just alt-nice.— Andi Zeisler (@andizeisler) November 22, 2016 The problem with euphemisms like these is that they quickly infect everyday language. We use the words we hear around us. The more often we see “alt-right” instead of “Nazi,” the more likely we are to use that phrase ourselves—normalizing the term as well as the terrible ideas behind it. Patterns from sentences That process of normalization gets a boost from the media, our main vector of information about the world outside ourselves. Headlines control how we interpret the news that follows—even if the story contradicts it in the end.
We hear the framing more clearly than the content itself, coloring our interpretation of the news over time. Even worse, headlines are often written to encourage clicks, not to convey critical information. When headline-writing is driven by sensationalism, it’s much, much easier to build a pattern of misinformation. Take this CBS News headline: “Donald Trump: ‘Millions’ voted illegally for Hillary Clinton.” The headline makes no indication that this is an objectively false statement; instead, this word choice subtly suggests that millions did, in fact, illegally vote for Hillary Clinton. Headlines like this are what make lying a worthwhile political strategy. https://t.co/DRjGeYVKmW— Binyamin Appelbaum (@BCAppelbaum) November 27, 2016 This is a deeply dangerous choice of words when headlines are the primary way that news is conveyed—especially on social media, where it’s much faster to share than to actually read the article. In fact, according to a study from the Media Insight Project, “roughly six in 10 people acknowledge that they have done nothing more than read news headlines in the past week.” If a powerful person asserts X there are 2 responsible ways to cover:1. “X is true”2. “Person incorrectly thinks X”Never “Person says X”— Helen Rosner (@hels) November 27, 2016 Even if we do, in fact, read the whole article, there’s no guarantee that we’re thinking critically about it. A study conducted by Stanford found that “82 percent of students could not distinguish between a sponsored post and an actual news article on the same website. Nearly 70 percent of middle schoolers thought they had no reason to distrust a sponsored finance article written by the CEO of a bank, and many students evaluate the trustworthiness of tweets based on their level of detail and the size of attached photos.” Friends: our information literacy is not very good. Luckily, we—workers of the web—are in a position to improve it. Sentences into design Consider the presentation of those all-important headlines in social media cards, as on Facebook. The display is a combination of both the card’s design and the article’s source code, and looks something like this: A large image, a large headline; perhaps a brief description; and, at the bottom, in pale gray, a source and an author’s name. Those choices convey certain values: specifically, they suggest that the headline and the picture are the entire point. The source is so deemphasized that it’s easy to see how fake news gains a foothold: daily exposure to this kind of hierarchy has taught us that sources aren’t important. And that’s the message from the best-case scenario. Not every article shows every piece of data. Take this headline from the BBC: “Wisconsin receives request for vote recount.” With no image, no description, and no author, there’s little opportunity to signal trust or provide nuance. There’s also no date—ever—which presents potentially misleading complications, especially in the context of “breaking news.” And lest you think dates don’t matter in the light-speed era of social media, take the headline, “Maryland sidesteps electoral college.” Shared into my feed two days after the US presidential election, that’s some serious news with major historical implications. But since there’s no date on this card, there’s no way for readers to know that the “Tuesday” it refers to was in 2007. Again, a design choice has made misinformation far too contagious.
More recently, I posted my personal reaction to the death of Fidel Castro via a series of twenty tweets. Wanting to share my thoughts with friends and family who don’t use Twitter, I then posted the first tweet to Facebook. The card it generated was less than ideal: The information hierarchy created by this approach prioritizes the name of the Twitter user (not even the handle), along with the avatar. Not only does that create an awkward “headline” (at least when you include a full stop in your name), but it also minimizes the content of the tweet itself—which was the whole point. The arbitrary elevation of some pieces of content over others—like huge headlines juxtaposed with minimized sources—teaches readers that these values are inherent to the content itself: that the headline is the news, that the source is irrelevant. We train readers to stop looking for the information we don’t put in front of them. These aren’t life-or-death scenarios; they are just cases where design decisions noticeably dictate the perception of information. Not every design decision makes so obvious an impact, but the impact is there. Every single action adds to the pattern. Design with intention We can’t necessarily teach people to read critically or vet their sources or stop believing conspiracy theories (or start believing facts). Our reach is limited to our roles: we make websites and products for companies and colleges and startups. But we have more reach there than we might realize. Every decision we make influences how information is presented in the world. Every presentation adds to the pattern. No matter how innocuous our organization, how lowly our title, how small our user base—every single one of us contributes, a little bit, to the way information is perceived. Are we changing it for the better? While it’s always been crucial to act ethically in the building of the web, our cultural climate now requires dedicated, individual conscientiousness. It’s not enough to think ourselves neutral, to dismiss our work as meaningless or apolitical. Everything is political. Every action, and every inaction, has an impact. As Chappell Ellison put it much more eloquently than I can: Every single action and decision a designer commits is a political act. The question is, are you a conscious actor?— Chappell Ellison🤔 (@ChappellTracker) November 28, 2016 As shapers of information, we have a responsibility: to create clarity, to further understanding, to advance truth. Every single one of us must choose to treat information—and the society it builds—with integrity. 2016 Lisa Maria Martin lisamariamartin 2016-12-14T00:00:00+00:00 https://24ways.org/2016/information-literacy-is-a-design-problem/ content
290 Creating a Weekly Research Cadence Working on a product team, it’s easy to get hyper-focused on building features and lose sight of your users and their daily challenges. User research can be time-consuming to set up, so it often becomes ad-hoc and irregular, only performed in response to a particular question or concern. But without frequent touch points and opportunities for discovery, your product will stagnate and become less and less relevant. Setting up an efficient cadence of weekly research conversations will re-focus your team on user problems and provide a steady stream of insights for product development. As my team transitioned into a Lean process earlier this year, we needed a way to get more feedback from users in a short amount of time. Our users are internet marketers—always busy and often difficult to reach. Scheduling research took days of emailing back and forth to find mutually agreeable times, and juggling one-off conversations made it difficult to connect with more than one or two people per week. The slow pace of research was allowing additional risk to creep into our product development. I wanted to find a way for our team to test ideas and validate assumptions sooner and more often—but without increasing the administrative burden of scheduling. The solution: creating a regular cadence of research and testing that required a minimum of effort to coordinate. Setting up a weekly user research cadence accelerated our learning and built momentum behind strategic experiments. By dedicating time every week to talk to a few users, we made ongoing research a painless part of every weekly sprint. But increasing the frequency of our research had other benefits as well. With only five working days between sessions, a weekly cadence forced us to keep our work small and iterative. Committing to testing something every week meant showing work earlier and more often than we might have preferred—pushing us out of our comfort zone into a process of more rapid experimentation. Best of all, frequent conversations with users helped us become more customer-focused. After just a few weeks in a consistent research cadence, I noticed user feedback weaving itself through our planning and strategy sessions. Comments like “Remember what Jenna said last week, about not being able to customize her lists?” would pop up as frequent reference points to guide our decisions. As discussions became less about subjective opinions and more about responding to user needs, we saw immediate improvement in the quality of our solutions. Establishing an efficient recruitment process The key to creating a regular cadence of ongoing user research is an efficient recruitment and scheduling process—along with a commitment to prioritize the time needed for research conversations. This is an invaluable tool for product teams (whether or not they follow a Lean process), but could easily be adapted for content strategy teams, agency teams, a UX team of one, or any other project that would benefit from short, frequent conversations with users. The process I use requires a few hours of setup time at the beginning, but pays off in better learning and better releases over the long run. Almost any team could use this as a starting point and adapt it to their own needs. Pick a dedicated time each week for research In order to make research a priority, we started by choosing a time each week when everyone on the product team was available. Between stand-ups, grooming sessions, and roadmap reviews, it wasn’t easy to do!
Nevertheless, it’s important to include as many people as possible in conversations with your users. Getting a second-hand summary of research results doesn’t have the same impact as hearing someone describe their frustrations and concerns first-hand. The more people in the room to hear those concerns, the more likely they are to become priorities for your team. I blocked off 2 hours for research conversations every Thursday afternoon. We make this time sacred, and never schedule other meetings or work across those hours. Divide your time into several research slots After my weekly cadence was set, I divided the time into four 20-minute time slots. Twenty minutes is long enough for us to ask several open-ended questions or get feedback on a prototype, without being a burden on our users’ busy schedules. Depending on your work, you may need to schedule longer sessions—but beware the urge to create blocks that last an hour or more. A weekly research cadence is designed to facilitate rapid, ongoing feedback and testing; it should force you to talk to users often and to keep your work small and iterative. Projects that require longer, more in-depth testing will probably need a dedicated research project of their own. I used the scheduling software Calendly to create interview appointments on a calendar that I can share with users, and customized the confirmation and reminder emails with information about how to access our video conferencing software. (Most of our research is done remotely, but this could be set up with details for in-person meetings as well.) Automating these emails and reminders took a little bit of time to set up, but was worth it for how much faster it made the process overall. Invite users to sign up for a time that’s convenient for them With a calendar set up and follow-up emails automated, it becomes incredibly easy to schedule research conversations. Each week, I send a short email out to a small group of users inviting them to participate, explaining that this is a chance to provide feedback that will improve our product or occasionally promoting the opportunity to get a sneak peek at new features we’re working on. The email includes a link to the Calendly appointments, allowing users who are interested to opt in to a time that fits their schedule. Setting up appointments the first go around involved a bit of educated guessing. How many invitations would it take to fill all four of my weekly slots? How far in advance did I need to recruit users? But after a few weeks of trial and error, I found that sending 12-16 invitations usually allows me to fill all four interview slots. Our users often have meetings pop up at short notice, so we get the best results when I send the recruiting email on Tuesday, two days before my research block. It may take a bit of experimentation to fine tune your process, but it’s worth the effort to get it right. (The worst thing that’s happened since I began recruiting this way was receiving emails from users complaining that there were no open slots available!) I can now fill most of an afternoon with back-to-back user research sessions by sending just one or two emails each week, increasing our research pace while leaving plenty of time to focus on discovery and design. Getting the most out of your research sessions As you get comfortable with the rhythm of talking to users each week, you’ll find more and more ways to get value out of your conversations.
At first, you may prefer to just show work in progress—such as mockups or a simple prototype—and ask open-ended questions to measure user reaction. When you begin new projects, you may want to use this time to research behavior on existing features—either watching participants as they use part of your product or asking them to give an account of a recent experience in your app. You may even want to run more abstracted Lean experiments, if that’s the best way to validate the assumptions your team is working from. Whatever you do, plan some time a day or two later to come back together and review what you’ve learned each week. Synthesizing research outcomes as a group will help keep your team in alignment and allow each person to highlight what they took away from each conversation. Over time, you may find that the pace of weekly user research becomes more exhausting than energizing, especially if the responsibility for scheduling and planning falls on just one person. Don’t allow yourself to get burned out; a healthy research cadence should also include time to rest and reflect if the pace becomes too rapid to sustain. Take breaks as needed, then pick up the pace again as soon as you’re ready. 2016 Wren Lanier wrenlanier 2016-12-02T00:00:00+00:00 https://24ways.org/2016/creating-a-weekly-research-cadence/ ux
289 Front-End Developers Are Information Architects Too The theme of this year’s World IA Day was “Information Everywhere, Architects Everywhere”. This article isn’t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing number of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People’s ability to be successful online is unequivocally connected to the quality of the code that is written. Places made of information In information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated: People live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds. 25 Theses We’re creating structures which people rely on for significant parts of their lives, so it’s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML. Semantics The word “semantic” has its origin in Greek words meaning “significant”, “signify”, and “sign”. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building’s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively, consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don’t need an “entrance” sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signifies their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building’s purpose happens, but the effect is similar: the building is signifying something about its purpose. HTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating: Elements, attributes, and attribute values in HTML are defined … to have certain meanings (semantics). For example, the <ol> element represents an ordered list, and the lang attribute represents the language of the content. HTML 5.1 Semantics, structure, and APIs of HTML documents HTML’s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they’re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does.
It’s also what a front-end developer does, whether they realise it or not. A brief introduction to information architecture We’re going to start by looking at what an information architect is. There are many definitions, and I’m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is: the individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information. Of Patterns And Structures To me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people. Just as there are many definitions of what an information architect is, there are many for information architecture itself. I’m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as: The structural design of shared information environments. The synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems. The art and science of shaping information products and experiences to support usability, findability, and understanding. Information Architecture For The World Wide Web, 4th Edition To me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015’s State Of The Browser conference, Edd Sowden talked about the accessibility of <table>s. He discovered that by simply not using the semantically-correct <th> element to mark up <table> headings, in some situations browsers will decide that a <table> is being used for layout and essentially make it invisible to assistive technology (there’s a sketch of this below). Another example of how coding practices can affect the usability and findability of content is shown by Léonie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page. Our definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we’re going to use. Dan defines these terms as: Ontology The definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate. What we mean when we say what we say. Taxonomy The arrangements of the parts. Developing systems and structures for what everything’s called, where everything’s sorted, and the relationships between labels and categories. Choreography Rules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.
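Edd Sowden’s <table> finding is easier to picture with markup in front of us. Here’s a minimal sketch (the schedule data is invented for illustration):
<!-- Cells only: in some situations a browser may decide this is
     a layout table and hide its table semantics from assistive technology -->
<table>
  <tr><td>Programme</td><td>Channel</td></tr>
  <tr><td>Doctor Who</td><td>BBC1</td></tr>
</table>
<!-- Semantic headings: unambiguously a data table -->
<table>
  <tr><th scope="col">Programme</th><th scope="col">Channel</th></tr>
  <tr><td>Doctor Who</td><td>BBC1</td></tr>
</table>
The scope attribute goes a step further, telling assistive technology which cells each heading governs.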
We now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states: … data is not information. Information is data endowed with relevance and purpose. Managing For The Future If we use the correct semantic element to mark up content then we’re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we’re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an <h2> element should be relevant to its parent, which should be the <h1>. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you’ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:
by creating a list of headings so users can quickly scan the page for information
by using a keyboard command to cycle through one heading at a time
If we had a document for Christmas Day TV we might structure it something like this:
<h1>Christmas Day TV schedule</h1>
<h2>BBC1</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>BBC2</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>ITV</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>Channel 4</h2>
<h3>Morning</h3>
<h3>Evening</h3>
If I use VoiceOver to generate a list of headings, I get this: Once I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the <h2>s: If we hadn’t used headings, or if we’d nested them incorrectly, our users would be frustrated. Putting this together Let’s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a <button>. Every browser understands what a <button> is, how it works, and what keyboard shortcuts should be used with it. The HTML specification for the <button> element says: The <button> element represents a button labeled by its contents. Along with its contents (the text label the user sees), a <button> carries the type attribute and any relevant ARIA attributes. This information is more important than the visual design: it doesn’t matter how beautiful or obtuse the design is, if the underlying code is non-semantic and poorly labelled, people are going to struggle to use it. Here are three buttons, each created with the same HTML but with different designs: Regardless of what they look like, because we’ve used semantic HTML instead of a bunch of meaningless <div>s or <span>s, people who use assistive technology are going to benefit. Out of the box, without any extra development effort, a <button> is accessible and usable with a keyboard. We don’t have to write event handlers to listen for people pressing the Enter key or the space bar, which we would have to do if we’d faked a button with non-semantic elements.
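To make that last point concrete, here’s a hedged sketch of the work a faked button demands of us (the element IDs and the togglePanel helper are invented for illustration, assuming the settings panel we’re about to build):
<!-- A faked button: no built-in role, focus, or keyboard behaviour,
     so we have to bolt all three on by hand -->
<div id="fake-settings" role="button" tabindex="0">Settings</div>
<script>
  // Everything below comes for free with a real <button>
  function togglePanel() {
    document.getElementById('panel').classList.toggle('hidden');
  }
  var fake = document.getElementById('fake-settings');
  fake.addEventListener('click', togglePanel);
  fake.addEventListener('keydown', function (event) {
    // Re-create Enter (13) and space bar (32) activation by hand
    if (event.keyCode === 13 || event.keyCode === 32) {
      event.preventDefault();
      togglePanel();
    }
  });
</script>
One faked control, a dozen lines of liability.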
Our <button> can also be quickly findable: for example, in the same way it’s possible to create a list of headings with a screen reader, I can also create a list of form elements and then quickly jump to the one I want. Now we have our <button>, let’s add the panel we’re toggling the appearance of. Here’s our code:
<button aria-controls="panel" aria-expanded="false" class="settings" id="settings" type="button">Settings</button>
<div class="panel hidden" id="panel">
  <ul aria-labelledby="settings">
    <li><a href="…">Account</a></li>
    <li><a href="…">Privacy</a></li>
    <li><a href="…">Security</a></li>
  </ul>
</div>
There’s quite a bit going on here. We’re using the:
aria-controls attribute to architect a connection between the <button> element and the panel whose appearance it controls. When some assistive technology, for example the JAWS screen reader, encounters an element with aria-controls it audibly tells a user about the element it controls and gives them the ability to move focus to it.
aria-expanded attribute to denote whether the panel is visible or not. We toggle this value using JavaScript to true when the panel is visible and false when it’s not. This important attribute tells people who use screen readers about the state of the elements they’re interacting with. For example, VoiceOver announces “Settings expanded button” when the panel is visible and “Settings collapsed button” when it’s hidden.
aria-labelledby attribute to give the list a title of “Settings”. This can benefit some users of assistive technology. For example, screen readers can cycle through all the lists on a page, so being able to title them can improve findability. Being able to hear “list Settings three items” is, I’d argue, more useful than “list three items”. By doing this we’re supporting usability and findability.
<ul> element to contain our list of links in our panel.
Let’s look at the choice of <ul> to contain our settings choices. Firstly, our settings are related items, so they belong in a structure that semantically groups things. This is something that a list can do that other elements or patterns can’t. This pattern, for example, isn’t semantic and has no structure:
<div><a href="…">Account</a></div>
<div><a href="…">Privacy</a></div>
<div><a href="…">Security</a></div>
All we have there is three elements next to each other on the screen and in the DOM. That is not robust code that signifies anything. Why are we using an unordered list as opposed to an ordered list or a definition list? A quick look at the HTML specification tells us why: The <ul> element represents a list of items, where the order of the items is not important — that is, where changing the order would not materially change the meaning of the document. The HTML 5.1 specification’s description of the element Will the meaning of our document materially change if we move the order of our links around? Nope. Therefore, I’d argue, we’ve used the correct element to structure our content. These coding decisions are information architecture I believe that what we’ve done here is pure information architecture. Going back to Dan Klyn’s model, we’ve practiced ontology by looking at the meaning of what we’re intending to communicate:
we want to communicate there is an interactive element that toggles the appearance of an element on a page, so we’ve used one, a <button>, with those semantics.
programmatically we’ve used the type='button' attribute to signify that the button isn’t a menu, reset, or submit element.
visually we’ve designed our <button> to look like something that can be interacted with and, importantly, we haven’t removed the focus ring.
we’ve labelled the <button> with the word “Settings” so that our users will hopefully understand what the button is for.
we’ve used an <ul> element to structure and communicate our list of related items.
We’ve also practiced taxonomy by developing systems and structures and creating relationships between our elements:
by connecting the <button> to the panel using the aria-controls attribute we’ve programmatically created a relationship between two elements.
we’ve developed a structure in our elements by labelling our <ul> with the same name as the <button> that controls its appearance.
And finally we’ve practiced choreography by creating elements that foster movement and interaction. We’ve anticipated the way users and information want to flow:
we’ve used a <button> element that is interactive and accessible out of the box.
our aria-controls attribute can help some people who use screen readers move easily from the <button> to the panel it controls.
by toggling the value of the aria-expanded attribute we’ve developed a system that tells assistive technology about the status of the relationship between our elements: the panel is visible or the panel is hidden.
we’ve made sure our information is more usable and findable no matter how our users want or need to interact with it.
Regardless of how someone “sees” our work they’re going to be able to use it because we’ve architected multiple ways to access our information. Information architecture, robust code, and accessibility The United Nations estimates that around 10% of the world’s population has some form of disability which, at the time of writing, is around 740,000,000 people. That’s a lot of people who rely on well-architected semantic code that can be interpreted by whatever assistive technology they may need to use. If everyone involved in the creation of our places made of information practiced information architecture it would make satisfying the WCAG 2.0 POUR principles so much easier. Our digital construction practices directly affect the quality of life of millions of people, and we have a responsibility to make technology available to them. In her book How To Make Sense Of Any Mess, Abby Covert states: If we’re going to be successful in this new world, we need to see information as a workable material and learn to architect it in a way that gets us to our goals. How To Make Sense Of Any Mess I believe that the world will be a better place if we start treating front-end development as information architecture. 2016 Francis Storr francisstorr 2016-12-17T00:00:00+00:00 https://24ways.org/2016/front-end-developers-are-information-architects-too/ code
288 Displaying Icons with Fonts and Data-Attributes Traditionally, bitmap formats such as PNG have been the standard way of delivering iconography on websites. They’re quick and easy, and also ensure icons are as pixel-crisp as possible. Bitmaps have two drawbacks, however: multiple HTTP requests, affecting the page’s loading performance; and a lack of scalability, noticeable when the page is zoomed or viewed on a screen with a high pixel density, such as the iPhone 4 and 4S. The requests problem is normally solved by using CSS sprites, combining the icon set into one (physically) large image file and showing the relevant portion via background-position. While this works well, it can get a bit fiddly to specify all the positions. More importantly, scalability is still an issue. A vector-based format such as SVG sounds ideal to solve this, but browser support is still patchy. The rise and adoption of web fonts have given us another alternative. By their very nature, they’re not only scalable, but resolution-independent too. No need to specify higher resolution graphics for high resolution screens! That’s not all though:
Browser support: Unlike a lot of new shiny techniques, they have been supported by Internet Explorer since version 4, and, of course, by all modern browsers. We do need several different formats, however!
Design on the fly: The font contains the basic graphic, which can then be coloured easily with CSS – changing colours for themes or :hover and :focus styles is done with one line of CSS, rather than requiring a new graphic. You can also use CSS3 properties such as text-shadow to add further effects. Using -webkit-background-clip: text;, it’s possible to use gradient and inset shadow effects, although this creates a bitmap mask which spoils the scalability.
Small file size: specially designed icon fonts, such as Drew Wilson’s Pictos font, can be as little as 12Kb for the .woff font. This is because they contain fewer characters than a fully fledged font. You can see Pictos being used in the wild on sites like Garrett Murray’s Maniacal Rage.
As with all formats though, it’s not without its disadvantages:
Icons can only be rendered in monochrome or with a gradient fill in browsers that are capable of rendering CSS3 gradients. Specific parts of the icon can’t be a different colour.
It’s only appropriate when there is accompanying text to provide meaning. This can be alleviated by wrapping the text label in a tag (I like to use <b> rather than <span>, due to the fact that it’s smaller and isn’t being used elsewhere) and then hiding it from view with text-indent:-999em (there’s a sketch of this below).
Creating an icon font can be a complex and time-consuming process. While font editors can carry out hinting automatically, the best results are achieved manually.
Unless you’re adept at creating your own fonts, you’re restricted to what is available in the font. However, fonts like Pictos will cover the most common needs, and icons are most effective when they’re using familiar conventions.
The main complaint about using fonts for icons is that it can mean adding a meaningless character to our markup. The good news is that we can overcome this by using one of two methods – CSS generated content or the data-icon attribute – in combination with the :before and :after pseudo-elements, to keep our markup minimal and meaningful. Our simple markup looks like this: <a href="/basket" class="icon basket">View Basket</a> Note the multiple class attribute values.
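As an aside, that text-hiding technique might look something like this. A sketch, assuming a control whose icon must stand alone (the delete link is invented, and the extra declarations are one way to collapse the label’s box, since text-indent alone has no effect on inline boxes):
<style>
  .icon b {
    display: inline-block;   /* give text-indent a block container to act on */
    width: 0;
    overflow: hidden;        /* stop the shifted text from painting */
    text-indent: -999em;     /* shove the label off-screen */
  }
</style>
<a href="/delete" class="icon delete"><b>Delete</b></a>
The label stays in the document for screen readers while only the icon is painted on screen.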
Next, we’ll import the Pictos font using the @font-face web fonts property in CSS:
@font-face {
  font-family: 'Pictos';
  src: url('pictos-web.eot');
  src: local('☺'),
    url('pictos-web.woff') format('woff'),
    url('pictos-web.ttf') format('truetype'),
    url('pictos-web.svg#webfontIyfZbseF') format('svg');
}
This rather complicated-looking set of rules is (at the time of writing) the most bulletproof way of ensuring as many browsers as possible load the font we want. We’ll now use the content property applied to the :before pseudo-element to generate our icon. Once again, we’ll use those multiple class attribute values to set common icon styles, then specific styles for .basket. This helps us avoid repeating styles:
.icon { font-family: 'Pictos'; font-size: 22px; }
.basket:before { content: "$"; }
What does the :before pseudo-element do? It generates the dollar character in a browser, even when it’s not present in the markup. Using the generated content approach means our markup stays simple, but we’ll need a new line of CSS, defining what letter to apply to each class attribute for every icon we add. data-icon is a new alternative approach that uses the HTML5 data- attribute in combination with CSS attribute selectors. This new attribute lets us add our own metadata to elements, as long as it’s prefixed by data- and doesn’t contain any uppercase letters. In this case, we want to use it to provide the letter value for the icon. Look closely at this markup and you’ll see the data-icon attribute. <a href="/basket" class="icon" data-icon="$">View Basket</a> We could add others, in fact as many as we like.
<a href="/" class="icon" data-icon="k">Favourites</a>
<a href="/" class="icon" data-icon="t">History</a>
<a href="/" class="icon" data-icon="@">Location</a>
Then, we need just one CSS attribute selector to style all our icons in one go:
.icon:before {
  content: attr(data-icon);
  /* Insert your fancy colours here */
}
By placing our custom attribute data-icon in the selector in this way, we can enable CSS to read the value of that attribute and display it before the element (in this case, the anchor tag). It saves writing a lot of CSS rules. I can imagine that some may not like the extra attribute, but it does keep it out of the actual content – generated or not. This could be used for all manner of tasks, including a media player and large simple illustrations. See the demo for live examples. Go ahead and zoom the page, and the icons will be crisp, with the exception of the examples that use -webkit-background-clip: text as mentioned earlier. Finally, it’s worth pointing out that with both generated content and the data-icon method, the letter will be announced to people using screen readers. For example, with the shopping basket icon above, the reader will say “dollar sign view basket”. As accessibility issues go, it’s not exactly the worst, but could be confusing. You would need to decide whether this method is appropriate for the audience. Despite the disadvantages, icon fonts have huge potential. 2011 Jon Hicks jonhicks 2011-12-12T00:00:00+00:00 https://24ways.org/2011/displaying-icons-with-fonts-and-data-attributes/ code
287 Extracting the Content As we throw away our canvas-in approaches and yearn for a content-out process, there remains a pain point: the Content. It is spoken of in the hushed tones usually reserved for Lord Voldemort. The-thing-that-someone-else-is-responsible-for-that-must-not-be-named. Designers and developers have been burned before by not knowing what the Content is, how long it is, what style it is and when the hell it’s actually going to be delivered, in internet eons past. Warily, they ask clients for it. But clients don’t know what to make, or what is good, because no one taught them this in business school. Designers struggle to describe what they need and when, so the conversation gets put off until it’s almost too late, and then everyone is relieved that they can take the cop-out of putting up a blog and maybe some product descriptions from the brochure. The Content in content out. I’m guessing, as a smart, sophisticated, and, may I say, nicely-scented reader of the honourable and venerable tradition of 24 ways, that you sense something better is out there. Bunches of boxes to fill in just don’t cut it any more in a responsive web design world. The first question is, how are you going to design something to ensure users have the easiest access to the best Content, if you haven’t defined at the beginning what that Content is? Of course, it’s more than possible that your clients have done lots of user research before approaching you to start this project, and have a plethora of finely tuned Content for you to design with. Have you finished laughing yet? Alright then. Let’s just assume that, for whatever reason of gross oversight, this hasn’t happened. What next? Bringing up Content for the first time with a client is like discussing contraception when you’re in a new relationship. It might be awkward and either party would probably rather be doing something else, but it needs to be broached before any action happens (that, and it’s disastrous to assume the other party has the matter in hand). If we can’t talk about it, how can we expect people to be doing it right and not making stupid mistakes? That being the case, how do we talk about Content? Let’s start by finding a way to talk about it without blushing and scuffing our shoes. And there’s a reason I’ve been treating Content as a Proper Noun. The first step, and I mean really-first-step-way-back-at-the-beginning-of-the-project-while-you-are-still-scoping-out-what-the-hell-you-might-do-for-each-other-and-it’s-still-all-a-bit-awkward-like-a-first-date, is for you to explain to the client how important it is that you, together, work out what is important to your users as part of the user experience design, so that your users get the best user experience. The trouble is that, in most cases, this would lead to blank stares, possibly followed by a light cough and a query about using Comic Sans because it seems friendly. Let’s start by ensuring your clients understand the task ahead. You see, all the time we talk about the Content we do our clients a big disservice. Content is poorly defined. It looms over a project completion point like an unscalable (in the sense of a dozen stacked Kilimanjaros), seething, massive, singular entity. The Content. Defining the problem. We should really be thinking of the Content as ‘contents’; as many parts that come together to form a mighty experience, like hit 90s kids’ TV show Mighty Morphin Power Rangers*.
*For those of you who might have missed the Power Rangers, they were five teenagers with attitude, each given crazy mad individual skillz and a coloured lycra suit from an alien overlord. In return, they had to fight a new monster of the week using their abilities and weaponry in sync (even if the audio was not) and then, finally, in thrilling combination as a Humongous Mechanoid Machine of Awesome. They literally joined their individual selves, accessories and vehicles into a big robot. It was a toy manufacturer’s wet dream. So, why do I say Content is like the Power Rangers? Because Content is not just a humongous mecha. It is a combination of well-crafted pieces of contents that come together to form a well-crafted humongous mecha. Of Content. The Red Power Ranger was always the leader. You can imagine your text contents, found on about pages, product descriptions, blog articles, and so on, as being your Red Power Ranger. Maybe your pictures are your Yellow Power Ranger; video is Blue (not used as much as the others, but really impressive when given a good storyline); maybe Pink is your infographics (it’s wrong to find it sexier than the other equally important Rangers, but you kind of do anyway). And so on. These bits of content – Red Text Ranger, Yellow Picture Ranger and others – often join together on a page, like they are teaming up to fight the bad guy in an action scene, and when they all come together (your standard workaday huge mecha) in a launched site, that’s when Content becomes an entity. While you might have a vision for the whole site, Content rarely works that way. Of course, you keep your eye on the bigger prize, the completion of your mega robot, but to get there you need to assemble your working parts, the cogs and springs of contents that will mesh together to finally create your Humongous Mecha of Content. You create parts and join them to form a whole. (It’s rarely seamless; often we need to adjust as we go, but we can create our Mecha’s blueprint by making sure we have all the requisite parts.) The point here is the order in which these parts were created. No alien overlord plans a Humongous Mechanoid and then thinks, “Gee, how can I split this into smaller fighting units powered by teenagers in snazzy shiny suits?” No toy manufacturer goes into production of a mega robot, made up of model mecha vehicles with detachable arsenal, without thinking how they will easily fit back together to form the ‘Buy all five now to create the mega robot’ set. No good contents are created as a singular entity and chunked up to be slotted into place any which way, into the body of a site. Think contents, not the Content. Think of contents as smaller units, or as a plural. The Content is what you have at the end. The contents are what you are creating and they are easy to break down. You are no longer scaling the unscalable. You can draw the map and plot the path, page by page, section by section. The page table is your friend To do this, I use a page table. A page table is a simple table template you can create in the word processor of your choice, that you use to tell you everything about the contents of a page – everything except the contents itself. Here’s a page table I created for an employee’s guide to redundancy on the alpha.gov.uk website:
Guide to redundancy for employees
Page objective: Provide specific information for employees who are facing redundancy about the process, their options and next steps.
Source content: directgov page on Redundancy.
Scope: In scope
Page title: An employee’s guide to redundancy
Priority content:
Message: You have rights as an employee facing redundancy.
Method: A guide written in plain English, with links to appropriate additional content. A video guide (out of scope). Covers the stages of redundancy and rights for those in trade unions and not in trade unions. Glossary of unfamiliar terms.
Call to action: Read full guide, act to explore redundancy actions, benefits or new employment.
Assets: Link to redundancy calculator.
Secondary: Related items, or popular additional links. Additional tools, such as search and suggestions. Location set v not set states: microcopy encouraging location set where location may make a difference to the content – ie, Scotland/Northern Ireland.
Tertiary: Footer and standard links.
Content creation: Content exists but was created within the constraints of the previous CMS. Review, correct and edit where necessary.
Maintenance: Should be flagged for review upon advice from the Department for Work and Pensions, and annually.
Technology/Publishing/Policy implications: Should be reviewed once the glossary styles have been decided. No video guide in scope at this time, so language should be simple and screen reader friendly.
Reliance on third parties: None; all content and source exists in house.
Outstanding questions: None.
Download a copy of this page table
This particular page table template owes a lot to Brain Traffic’s version found in Kristina Halvorson’s book Content Strategy for the Web. With smaller clients than, say, the government, I might use something a bit more casual. With clients who like timescales and deadlines, I might turn it into a covering sheet, with signatures and agreements from two departments who have to work together to get the piece done on time. I use page tables, and the process of working through them, to reassure clients that I understand the task they face and that I can help them break it down section by section, page stack to page, down to product descriptions and interaction copy. About 80% of my clients break into relieved smiles. Most clients want to work with you to produce something good, they just don’t understand how, and they want you to show them the mountain path on the map. With page tables, clients can understand that with baby steps they can break down their content requirements and commission the content they need in time for the designers to work with it (as opposed to around it). If I was Santa, these clients would be on my nice list for sure. My own special brand of Voldemort-content-evilness comes in how I wield my page tables for the other 20%. Page tables are not always thrilling, I’ll admit. Sometimes they get ignored in favour of other things, yet they are crucial to the continual growth and maintenance of a truly content-led site. For these naughty list clients who, even when given the gift of the page table, continually say “Ooh, yes. Content. Right”, I have a special gift. I have a stack of recycled paper under my desk and a cheap black and white laser printer. And I print a blank page table for every conceivable page I can find on the planned redesign. If I’m feeling extra nice, I hole punch them and put them in a fat binder. There is nothing like saying, “This is all the contents you need to have in hand for launch”, and the satisfying thud the binder makes as it hits the table top, to galvanize even the naughtiest clients to start working with you to create the content you need to really create in a content-out way.
2011 Relly Annett-Baker rellyannettbaker 2011-12-15T00:00:00+00:00 https://24ways.org/2011/extracting-the-content/ content
286 Defending the Perimeter Against Web Widgets On July 14, 1789, citizens of Paris stormed the Bastille, igniting a revolution that toppled the French monarchy. On July 14 of this year, there was a less dramatic (though more tweeted) takedown: The Deck network, which delivers advertising to some of the most popular web design and culture destinations, was down for about thirty minutes. During this period, most partner sites running ads from The Deck could not be viewed as a result. A few partners were unaffected (aside from not having an ad to display). Fortunately, Dribbble was one of them. In this article, I’ll discuss outages like this and how to defend against them. But first, a few qualifiers: The Deck has been rock solid – this is the only downtime we’ve witnessed since joining in June. More importantly, the issues in play are applicable to any web widget you might add to your site to display third-party content. Down and out Your defense is only as good as its weakest link. Web pages are filled with links, some of which threaten the ability of your page to load quickly and correctly. If you want your site to work when external resources fail, you need to identify the weak links on your site. In this article, we’ll talk about web widgets as a point of failure and defensive JavaScript techniques for handling them. Widgets 101 Imagine a widget that prints out a Pun of the Day on your site. A simple technique for both widget provider and consumer is for the provider to expose a URL: http://widgetjonesdiary.com/punoftheday.js which returns a JavaScript file like this: document.write("<h2>The Pun of the Day</h2><p>Where do frogs go for beers after work? Hoppy hour!</p>"); The call to document.write() injects the string passed into the document where it is called. So to display the widget on your page, simply add an external script tag where you want it to appear:
<div class="punoftheday">
  <script src="http://widgetjonesdiary.com/punoftheday.js"></script>
  <!-- Content appears here as output of script above -->
</div>
This approach is incredibly easy for both provider and consumer. But there are implications… document.write()… or wrong? As in the example above, scripts may perform a document.write() to inject HTML. Page rendering halts while a script is processed so any output can be inlined into the document. Therefore, page rendering speed depends on how fast the script returns the data. If an external JavaScript widget hangs, so does the page content that follows. It was this scenario that briefly stalled partner sites of The Deck last summer. The elegant solution To make our web widget more robust, calls to document.write() should be avoided. This can be achieved with a technique called JSONP (AKA JSON with padding). In our example, instead of writing inline with document.write(), a JSONP script passes content to a callback function: publishPun("<h2>Pun of the Day</h2><p>Where do frogs go for beers after work? Hoppy hour!</p>"); Then, it’s up to the widget consumer to implement a callback function responsible for displaying the content.
Here’s a simple example where our callback uses jQuery to write the content into a target <div>:
<!-- Where widget content should appear -->
<div class="punoftheday"></div>
…
function publishPun(content) {
  $('.punoftheday').html(content); // Writes content to its display location
}
View Example 1 Even if the widget content appears at the top of the page, our script can be included at the bottom so it’s non-blocking: a slow response leaves page rendering unaffected. It simply invokes the callback which, in turn, writes the widget content to its display destination. The hack But what to do if your provider doesn’t support JSONP? This was our case with The Deck. Returning to our example, I’m reminded of computer scientist David Wheeler’s statement, “All problems in computer science can be solved by another level of indirection… Except for the problem of too many layers of indirection.” In our case, the indirection is to move the widget content into position after writing it to the page. This allows us to place the widget <script> tag at the bottom of the page so rendering won’t be blocked, but still display the widget in the target. The strategy: Load widget content into a hidden <div> at the bottom of the page. Move the loaded content from the hidden <div> to its display location. and the code:
<!-- Where widget content should appear -->
<div class="punoftheday"></div>
…
$('.punoftheday').append($('.loading-dock').children(':gt(0)'));
View Example 2 After the external punoftheday.js script has processed, the rendered HTML will look as follows:
<div class="loading-dock hidden">
  <script src="http://widgetjonesdiary.com/punoftheday.js"></script>
  <h2>Pun of the Day</h2>
  <p>Where do frogs go for beers after work? Hoppy hour!</p>
</div>
The ‘loading-dock’ <div> now includes the widget content, albeit hidden from view (if we’ve styled the ‘hidden’ class with display: none). There’s just one more step: move the content to its display destination. This line of jQuery (from above) does the trick: $('.punoftheday').append($('.loading-dock').children(':gt(0)')); This selects all child elements in the ‘loading-dock’ <div> except the first – the widget <script> tag that generated them – and moves them to the display destination. Worth noting is the :gt(0) jQuery selector extension, which allows us to exclude the first (in a 0-based array) child element – the widget <script> tag – from selection. Since all of this happens at the bottom of the page, just before the </body> tag, no rendering has to wait on the external widget script. The only thing that fails if our widget hangs is… the widget itself. Our weakest link has been strengthened and so has our site. DE-FENSE! 2011 Rich Thornett richthornett 2011-12-06T00:00:00+00:00 https://24ways.org/2011/defending-the-perimeter-against-web-widgets/ process
285 Composing the New Canon: Music, Harmony, Proportion Ohne Musik wäre das Leben ein Irrtum (Without music, life would be a mistake) —Friedrich NIETZSCHE, Götzen-Dämmerung, Sprüche und Pfeile 33, 1889 Somehow, music is hardcoded in human beings. It is something we understand and respond to without prior knowledge. Music exercises the emotions and our imaginative reflex, not just our hearing. It behaves so much like our emotions that music can seem to symbolize them, to bear them from one person to another. Not surprisingly, it conjures memories: the word music derives from Greek μουσική (mousike), art of the Muses, whose mythological mother was Mnemosyne, memory. But it can also summon up the blood, console the bereaved, inspire fanaticism, bolster governments and dissenters alike, help us learn, and make web designers dance. And what would Christmas be without music? Music moves us, often in ways we can’t explain. By some kind of alchemy, music frees us from the elaborate nuisance and inadequacy of words. Across the world and throughout recorded history – and no doubt well before that – people have listened and made (and made out to) music. [I]t appears probable that the progenitors of man, either the males or females or both sexes, before acquiring the power of expressing their mutual love in articulate language, endeavoured to charm each other with musical notes and rhythm. —Charles DARWIN, The Descent of Man, and Selection in Relation to Sex, 1871 It’s so integral to humankind, we’ve sent it into space as a totem for who we are. (Who knows? It might be important.) Music is essential, a universal compulsion; as Nietzsche wrote, without music life would be a mistake. Music, design and web design There are some obvious and notable similarities between music and visual design. Both can convey mood and evoke emotion but, even under close scrutiny, how they do that remains to a great extent mysterious. Each has formal qualities or parts that can be abstracted, analysed and discussed, often using the same terminology: composition, harmony, rhythm, repetition, form, theme; even colour, texture and tone. A possible reason for these shared aspects is that both visual design and music are means to connect with people in deep and lasting ways. Furthermore, I believe the connections to be made can complement direct emotional appeal. Certain aesthetic qualities in music work on an unconscious and, it could be argued, universal level. Using musical principles in our designs, then, can help provide the connectedness between content, device and user that we now seek as web designers. Yet, when we talk about music and web design, the conversation is almost always about the music designers listen to while working, a theme finding its apotheosis in Designers.MX. Sometimes, articles in that dreary list format seek inspiration from music industry websites. There’s even a service offering pre-templated web designs for bands, and at least one book surveyed the landscape back in 2006. Occasionally, discussions broaden somewhat into whether and how different kinds of music can inspire and influence the design work we produce. Such enquiries, it seems to me, are beside the point. Do I really design differently when I listen to Bach rather than Bacharach? Will the barely restrained energy of Count Basie’s The Kid from Red Bank mean I choose a lively colour palette, and rural, autumnal shades when inspired by Fleet Foxes? Mahler means a thirteen-column layout? Gillian Welch leads to distressed black and white photography?
While reflecting the importance we place in music and how it seems to help us in our work, surveys on musical taste and lists of favourite artists fail to recognize that some of the fundamental aesthetic characteristics of music can be adapted and incorporated into modern web design. Antiphonal geometry Over recent years, web designers have embraced grid systems as powerful tools to help create good-looking and intuitive user experiences. With the advent of responsive design, these grids and their contents must adapt to the different screen sizes and properties of all kinds of user devices. Finding and using grid values that can scale well and retain or enhance their proportions and relationships while making the user experience meaningful in several different contexts is more important than ever. In print, this challenge has always started with the dimensions and proportions of the page. Content can thereby be made to belong inside the page and be bound to it. And music has been used for centuries to further this aim. As Robert Bringhurst says in The Elements of Typographical Style: Indeed, one of the simplest of all systems of page proportions is based on the familiar intervals of the diatonic scale. Pages that embody these basic musical proportions have been in common use in Europe for more than a thousand years. Very well. But while he goes on to list (from the full chromatic scale, rather than just diatonic) the proportions and the musical intervals they’re based on, Bringhurst fails to mention what they’re ratios of or their potential effects. Shame. In his favour, however, he later touches on how proportions in print might be considered to work: The page is a piece of paper. It is also a visible and tangible proportion, silently sounding the thoroughbass of the book. On it lies the textblock, which must answer to the page. The two together – page and textblock – produce an antiphonal geometry. That geometry alone can bond the reader to the book. Or conversely, it can put the reader to sleep, or put the reader’s nerves on edge, or drive the reader away. So what does Bringhurst mean by antiphonal geometry, a phrase that marries the musical to the spatial? By stating that the textblock “must answer to the page”, the implication is that the relationship between the proportions of the page and the shape of the textblock printed on it embodies a spatial (geometrical) call-and-response (antiphony) that can be appealing or not. Boulton’s new canon But, as Mark Boulton has pointed out, on the web “there are no edges. There are no ‘pages’. We’ve made them up.” So, what is to be done? In January 2011 at the New Adventures in Web Design conference, Boulton presented his vision of a new canon of web design, a set of principles to guide us as we design the web. There are three overlapping tenets:
design from the content out
create connectedness between the different content elements
bind the content to the web device
Rather than design from the edges in, we need to design layout systems from the content out. To this end, Boulton asserts that grid system design should begin with a constraint, and he suggests we use the size of a fixed content element, such as an advertising unit or image, as a starting point for online grid calculations. Khoi Vinh advocates the same approach in his book, Ordering Disorder: Grid Principles for Web Design. Boulton’s second and third tenets, however, are more complex and overlap significantly with each other.
Connecting the different parts of the content and binding the content to the device share many characteristics and solutions:
adopting ems and percentages as units of size for layout elements
altering text size, line length and line height for different viewport dimensions
providing higher resolution images for devices with greater pixel densities
fluid layout grids, flexible images and responsive design
All can help relate the presentation of the content to its delivery in a certain context. But how do we determine the relationship between one element of a layout and another? How can we avoid making arbitrary decisions about the relative sizes of parts of our designs? What can we use to connect the parts of our design to one another, and how can we bind the presentation of the content to the user’s device? Tim Brown’s application of modular typographic scales hints at an answer. In the very useful tool he created for calculating such scales, Brown includes two musical ratios: the perfect fifth (2:3); and the perfect fourth (3:4). Why? Where do they come from? And what do they mean? Harmonies musical and visual Fundamental to music are rhythm and harmony. As any drummer will tell you, without rhythm there is no music. Even when there’s no regular beat, any tune follows a rhythm, however irregular, simply because a change of note is a point of change in the music. Although rhythm, timing and pacing are all relevant to interaction design, right now it’s harmony we’re interested in. Sometimes harmony is called the vertical aspect of music, and melody the horizontal. But this conceit overlooks the fact that harmony is both vertical and horizontal. A single melodic line, as it is played, implies various sets of harmonies on which it is grounded, whether or not those harmonies are played. So, harmony doesn’t just sit vertically beneath the horizontal melody, but moves horizontally as well, through harmonic progression. To stretch this arrangement pixel-thin, we could argue that in onscreen design melody is the content, and the layout and arrangement of the content is the harmony. We sometimes say a design is harmonious when the interplay of different elements of a design is pleasing or balanced or in proportion, and the content (the melody) is set off or conveyed well by or appropriate to the design. We seem to know instinctively whether a layout is harmonious… In the design of The Great Discontent, the relationships between different elements combine to form a balanced whole. …or not. There’s no harmony in the Department for Education’s website because the different parts of the content don’t feel related to one another. What is it that makes one design harmonious and another dissonant? It’s not just whether things line up, though that’s a start. I believe there are much deeper aesthetic forces at work, forces we can tap into in our onscreen designs. Now, I’m not going to start a difficult discussion about aesthetics. For our purposes, we just need to know that it’s the branch of philosophy dealing with the nature of beauty, and the creation and perception of beauty. And among the key components in the perception of beauty are harmony and proportion. These have been part of traditional western aesthetics since Plato (about 2,500 years). One of the ways we appreciate the beauty of music is through the harmonic intervals we hear. A musical interval is a combination of two notes and it describes the distance between the two pitches.
For example, the distance between C and the G above it (if we take C as the tonic or root) is called a perfect fifth.

Left: C to G, a perfect fifth. Right: C and G, not a perfect fifth.

And, to get superficially scientific for a moment, each musical interval can be expressed as a ratio of the frequencies of the notes: for our perfect fifth, for every two vibrations of C, there are three of G. And what is a ratio, if not a proportion of one thing to another? So, simple musical harmony (using what’s known as just intonation [1]) affords us a set of proportions, expressed as ratios. Where better to apply these ideas of harmony and proportion from music in web design, than grid systems?

A digression: whither φ?

Quite often in our discussions of pure design and aesthetics, we mention the golden ratio and regurgitate the same justifications for its use: roots in antiquity; embodied in classical and Renaissance architecture and art; occurrence in nature; the New Twitter, and so forth (oh, really?). Yet the ratios of musical intervals from just intonation are equally venerable and much more widespread: described by Pythagoras; employed in Palladian architecture, and printing, books and art from the Renaissance onwards; in modern times, film and television dimensions; standard international paper sizes (ISO 216, the A and B series); and, again and again, screen dimensions – chances are the screen you’re looking at right now has the proportions 2:3 (iPhone and iPod Touch), 3:4 (iPad and Kindle), 3:5 (many smartphones), 5:8 or 16:9 (many widescreen monitors), all ratios of musical intervals.

Back to our theme… Musical interval ratios

Let’s take a look at most of the ratios within a couple of octaves and crunch some numbers to generate some percentages and other values that we can use in our designs. First, the intervals and their ratios in just intonation and expressed as ratios of one:

Name | Interval in C | Ratio | Ratio (1:x)
unison | C→C | 1:1 | 1:1
minor second | C→D♭ | 15:16 | 1:1.067
major second | C→D | 8:9 | 1:1.125
minor third | C→E♭ | 5:6 | 1:1.2
major third | C→E | 4:5 | 1:1.25
perfect fourth | C→F | 3:4 | 1:1.333
augmented fourth or diminished fifth | C→F♯/G♭ | 1:√2 | 1:1.414
perfect fifth | C→G | 2:3 | 1:1.5
minor sixth | C→A♭ | 5:8 | 1:1.6
major sixth | C→A | 3:5 | 1:1.667
minor seventh | C→B♭ | 9:16 | 1:1.778
major seventh | C→B | 8:15 | 1:1.875
octave | C→C↑ | 1:2 | 1:2
major tenth | C→E↑ | 2:5 | 1:2.5
major eleventh | C→F↑ | 3:8 | 1:2.667
major twelfth | C→G↑ | 1:3 | 1:3
double octave | C→C↑ | 1:4 | 1:4

And now as percentages, of both the larger and smaller values in the ratios:

Name | Ratio | % of larger value | % of smaller value
unison | 1:1 | 100% | 100%
minor second | 15:16 | 93.75% | 106.667%
major second | 8:9 | 88.889% | 112.5%
minor third | 5:6 | 83.333% | 120%
major third | 4:5 | 80% | 125%
perfect fourth | 3:4 | 75% | 133.333%
augmented fourth or diminished fifth | 1:√2 | 70.711% | 141.421%
perfect fifth | 2:3 | 66.667% | 150%
minor sixth | 5:8 | 62.5% | 160%
major sixth | 3:5 | 60% | 166.667%
minor seventh | 9:16 | 56.25% | 177.778%
major seventh | 8:15 | 53.333% | 187.5%
octave | 1:2 | 50% | 200%
major tenth | 2:5 | 40% | 250%
major eleventh | 3:8 | 37.5% | 266.667%
major twelfth | 1:3 | 33.333% | 300%
double octave | 1:4 | 25% | 400%

As you can see, the simple musical intervals are expressed as ratios of small whole numbers (integers). We can then normalize them as ratios of one, as well as derive percentage values, both in terms of the smaller value to the larger, and vice versa. These are the numbers we can incorporate into our designs.
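As a quick illustration of how those percentage columns translate into a style sheet, here is a minimal sketch; the class names are invented and the values are lifted straight from the table above:

/* Interval ratios as reusable layout proportions (hypothetical class names) */
.perfect-fourth { width: 75%; } /* 3:4 */
.perfect-fifth { width: 66.667%; } /* 2:3 */
.minor-sixth { width: 62.5%; } /* 5:8 */
.major-sixth { width: 60%; } /* 3:5 */

Apply one of these to a content block inside a full-width container and the two widths stand in that musical proportion to each other.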
If you’ve ever written something like

body { font: 100%/1.5 "Museo Sans", Helvetica, sans-serif; }

in your CSS, you’re already using a musical ratio: the perfect fifth. Modular scales allow us to generate a set of numbers based on a musical interval that can be used for all kinds of typographic and layout decisions to create harmony in a visual design for the web. As Tim Brown said at the 2010 Build conference:

I think that from that most atomic unit – type – whole experiences can resonate, whole experiences can be harmonious. And whole experiences can have a purpose suited to our design intentions.

Once more, with feeling: connectedness

As well as modular scales, there are other methods of incorporating musical interval ratios into our work. Setting the ratio of font size to line height in CSS is one such example. We could also create a typographic hierarchy using the same principle and combining several ratios that we know harmonize well musically in a chord:

html { font-size: 75%; } /* =12px = base size or tonic; set on the root element so the rem values below resolve as intended */
h1 { font-size: 32px; font-size: 2.667rem; } /* =32px = 3:8 = major eleventh (C→F↑) */
h2 { font-size: 24px; font-size: 2rem; } /* =24px = 1:2 = octave (C→C↑) */
h3 { font-size: 20px; font-size: 1.667rem; } /* =20px = 3:5 = major sixth (C→A) */
figcaption, small { font-size: 9px; font-size: 0.75rem; } /* =9px = 3:4 = perfect fourth (C→F) */

Whoa! Hold your reindeer, Santa! How can we know what interval combinations work well together to form chords? Well, I’m a classically trained musician, so perhaps I have an advantage. To avoid a long, technically complex digression into musical harmony, here are a few basic combinations of intervals that are harmonious in one way or another:

unison; major third; perfect fifth; octave
unison; perfect fourth; major sixth; octave
unison; minor third; minor sixth; octave
unison; minor third; diminished fifth; major sixth; octave

This isn’t to say that other combinations can’t be used to interesting effect and particular purpose – they surely can – but I have to make sure there’s something left for you to experiment with in the wee small hours over the holiday. Bear in mind, though, were I to play you two notes from the same scale to form a minor second, for example, you’d probably say it was dissonant and maybe that quality of the 15:16 ratio would be translated to the design. In the typographic hierarchy above, you’ll notice I used an interval in the higher octave, which affords a broader range of ratios while retaining the harmony. Thus, a perfect fifth (2:3) becomes a major twelfth (1:3), or a major sixth (3:5) becomes a major thirteenth (3:10). The harmonic ratios can obviously be used as proportions for layout as well, in several different ways:

image width and height (for example, 450×800px = 9:16 = minor seventh)
main content to page width (67%:100% = 2:3 = perfect fifth)
page width to viewport width (80%:100% = 4:5 = major third)

One great benefit of using such ratios in web design work is that they can be applied in responsive web design. Proportional values, based on percentages or equivalent em units, will scale with changing viewports, so your layout and image proportions can be maintained or deliberately changed, as we’re about to find out, across devices.

Small speakers, tall speakers: binding to the device

The musical interval ratios also provide an opportunity, not only to create connectedness between the parts of a layout, but to bind the content to a device – that elusive antiphonal geometry.
Just as a textblock and page resonate together, so too can web content and the screen. Earlier, I mentioned that several common screen aspect ratios match musical interval ratios. It would seem, then, that we have a set of proportions that we can use in different ways to establish and retain a sense of harmony that can be based on and change with those contexts. Using musical interval ratios, we can bind the display of our content to the device it’s displayed on. If you haven’t met already, let me introduce you to the device-aspect-ratio feature of CSS media queries.

@media only screen and (device-aspect-ratio: 3/4) { }
@media only screen and (device-aspect-ratio: 480/640) { }
@media only screen and (device-aspect-ratio: 600/800) { }
@media only screen and (device-aspect-ratio: 768/1024) { }

Regardless of the precise pixel values, each of these media queries would apply to devices whose display area has an aspect ratio of 3:4. It works by comparing the device-width with the device-height. (It’s not to be confused with aspect-ratio, which is defined by the width and height of the browser within the device.) The values in the media query are always presented as width/height, with portrait being the default orientation for smartphones and tablets; that is, to match an iPhone screen, you’d use device-aspect-ratio: 2/3, not 3/2, which won’t work. Here’s a table of the musical intervals with their corresponding screens.

Name | device-aspect-ratio | Screens | Common resolutions (pixels)
major third | 5/4 | TFT LCD computer screens | 1,280×1,024
perfect fourth | 3/4 or 4/3 | iPad, Kindle and other tablets, PDAs | 320×240, 768×1,024
perfect fifth | 2/3 | iPhone, iPod Touch | 320×480, 640×960
minor sixth | 8/5 (16/10) | Many widescreens | 1,152×720, 1,440×900, 1,920×1,200
major sixth | 3/5 | Many smartphones | 240×400, 480×800
minor seventh | 16/9 or 9/16 | Many widescreens and some smartphones | 720×1,280, 1,366×768, 1,920×1,080, 2,560×1,440

[You might argue that I’m playing fast and loose with the ratios. I suppose, mathematically speaking, 9:16 is not the same as 16:9: I’m no expert. But let’s not throw the baby out with the bath water, particularly at Christmas.]

With this in mind, we can begin to write media queries that will influence various typographic and layout values in line with the aspect ratios of specific screens, in combination with em-based min-width queries that work from smaller, mobile screens to larger, desktop screens. Here’s a very simple demo page with only some text, an image with a caption and a little basic layout: no seasonal overindulgence here.

Demo: Sample page using device-aspect-ratio media queries based on musical interval ratios

Our initial styles for all devices are based on the perfect fifth, with the major third and octave rounding things out into a harmonious whole, whether or not media queries are supported. For example:

html { font-size: 100%; line-height: 1.5; } /* font-size:line-height = 16:24 = 2:3 = perfect fifth */
h1 { font-size: 32px; font-size: 2rem; line-height: 1.25; } /* font-size:line-height = 32:40 = 4:5 = major third; body:h1 = 16:32 = 1:2 = octave */

While we should really consider methods of delivering images appropriate to the screen size, let’s just stick to a single image for all devices. But why don’t we change its aspect ratio from 4:3 to 3:2, to fit with our harmonic scheme?
It’s easy enough to do with overflow:hidden on the <figure> element to hide a part of the image, and a negative margin fudge:

figure img { margin: -8.5% 0 0 0; width: 100%; max-width: 100%; }

Our first break point targets devices 320 pixels wide with an aspect ratio of 2:3, namely the iPhone and iPod Touch:

/* 320px = 20×16 */
@media only screen and (min-width: 20em) and (device-aspect-ratio: 2/3) { }

We’re actually already there, of course, as the intervals we’ve chosen resonate with this aspect ratio – the content is already bound to the device. Our next media query, then, will make some changes to match a different ratio, the major sixth (3:5), which is the same as that of many smartphones:

/* 480px = 30×16 */
@media only screen and (min-width: 30em) and (device-aspect-ratio: 3/5) { }

A different aspect ratio might require a change in harmony. For devices with these proportions, we’ll now use the perfect fourth (3:4) and the major sixth (3:5) along with the octave we already have to create a new resonating harmony. For instance, a slightly wider screen means we can increase the line-height to aid the legibility of longer lines:

html { line-height: 1.667; } /* font-size:line-height = 16:26.672 = 3:5 = major sixth */
h1 { font-size: 32px; font-size: 2rem; line-height: 1.667; } /* font-size:line-height = 32:53.333 = 3:5 = major sixth; body:h1 = 16:32 = 1:2 = octave */

and we can remove the negative margin to display our 4:3 image in its entirety.

Each screen displays content styled using relationships related to its own proportions. On the left, an iPhone 4 (2:3); on the right, a Samsung Nexus S (3:5). Your mileage may vary.

Another device, another media query. At 768 pixels, screens are wide enough to add columns. The ratios we’ve used for the 3:5 screens include the perfect fourth (3:4) so we don’t need to change any of the font measurements, but we can base the proportions of the columns on the major sixth interval:

article { float: left; width: 56%; } /* width of main column 3:5 (60% of 100%, major sixth) incorporating gutter width */
aside { float: right; width: 36%; }

On devices with a 3:4 aspect ratio, this works even better in landscape orientation. While not every screen over 768 pixels wide will have 3:4 proportions, the range of intervals informing the design ensures harmonious relationships between the different parts of the layout. For wide screens proper (break point at 1,280 pixels) we can employ a new set of harmonious intervals. Many laptop and desktop screens have a 16:10 aspect ratio, which boils down to 8:5, equivalent to the minor sixth (5:8). Combined with a minor third (5:6) and the octave (1:2), this creates a new harmony appropriate to these devices. Let’s increase the font size and change the image’s aspect ratio to match:

html { font-size: 120%; line-height: 1.6; } /* font-size increased for wider screens from 16px to 19.2px (5:6 = minor third); font-size:line-height = 19.2:30.72 = 5:8 = minor sixth */
figure img { margin: -12.5% 0 0; } /* using -ve margin combined with overflow:hidden on the figure element to crop the image from 4:3 to 8:5 = minor sixth */

A wide screen with an 8:5 (16:10) aspect ratio and an image to match.
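Gathered into a single rule, that wide-screen break point might look something like this. It is a sketch only, combining the fragments above; the 80em value is the 1,280-pixel break point expressed in ems (1,280 ÷ 16):

/* Wide screens: minor sixth (5:8), minor third (5:6) and octave (1:2) */
@media only screen and (min-width: 80em) and (device-aspect-ratio: 8/5) {
  html { font-size: 120%; line-height: 1.6; } /* 19.2:30.72 = 5:8 = minor sixth */
  figure img { margin: -12.5% 0 0; } /* crop the 4:3 image to 8:5, with overflow: hidden on the figure */
}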
With more pixels at our disposal, we can also now use the musical interval ratios to determine the width of the layout, and change the column proportions as well:

section { margin: 0 auto; width: 83.333%; } /* content width:screen width = 5:6 = minor third */
article { width: 60%; } /* width of main column 5:8 (62.5% of 100%, minor sixth) incorporating gutter width */
aside { width: 35%; }

With some carefully targeted media queries, we can begin to reach towards fulfilling the second and third tenets of Boulton’s new canon for web design: connecting the parts of content through relationships embodied in the layout design; and binding the content to the devices people use to access it.

Coda

Musical interval ratios and screen aspect ratios reveal more than convenient correspondence. These proportions work on a deep aesthetic level. Much is claimed for the golden ratio φ, but none of the screens pervading our lives use it. Perhaps that’s an accident of technology, but can making screens to φ’s proportions be more difficult or expensive than 2:3 or 3:4 or 16:10? Here, then, is not just one but a set of proportions with a uniquely human focus, originating in nature, recognized in antiquity, fundamental still. We find music to be an art steeped in meaning, yet, unlike literary and representational arts, purely instrumental music has no obvious semantic content. It boasts an ability to express emotions while remaining an abstract art in some sense, which makes it very like design. These days, we’re rightly encouraged to design for emotion, to make our users’ experience meaningful, seductive, delightful. Using musical ideas and principles in our designs can help achieve those ends. Let’s not be naïve, of course; designing web pages is even less like composing music than it’s like designing for print. In visual design, the eye will always be sovereign to the ear; following these principles will only get us so far. We cannot truly claim that a carefully composed web page layout will have the same qualities and effect as any musical patterns that inform it. In music, a set of intervals is always harmonious in relation to other sets of intervals: music rarely stands still. What aspect ratios will future screens take? Already today there is great variation in devices and support for media queries (and within that, support for device-aspect-ratio). And what of non-western musical traditions? Or rhythm, form, tempo and dynamics? What I’ve demonstrated above is only a suggestion, a tentative exploration of one possible way forward. But as our discipline matures and we become more articulate about what we do, we must look longer and deeper into areas of human endeavour already rich with value. Music is a fertile ground to explore and has the potential to yield up new approaches for web design.

Footnotes

[1] Just intonation is a system of tuning that uses small integers to describe the musical intervals, based initially on the perfect fifth, that most consonant of intervals. Simple instruments such as vibrating strings and natural horns, as well as unaccompanied voices, tend to fall into just intonation naturally.

2011 Owen Gregory owengregory 2011-12-09T00:00:00+00:00 https://24ways.org/2011/composing-the-new-canon/ design
284 Subliminal User Experience The term ‘user experience’ is often used vaguely to quantify common elements of the interaction design process: wireframing, sitemapping and so on. UX undoubtedly involves all of these principles to some degree, but there really is a lot more to it than that. Good UX is characterized by providing the user with constant feedback as they step through your interface. It means thinking about and providing fallbacks and error resolutions in even the rarest of scenarios. It’s about omitting clutter to make way for the necessary, and using the most fundamental of design tools to influence a user’s path. It means making no assumptions, designing right down to the most distinct details and going one step further every single time. In many cases, good UX is completely subliminal. There are simple tools and subtleties we can build into our products to enhance the overall experience but, in order to do so, we really have to step beyond where we usually draw the line on what to design. The purpose of this article is not to provide technical how-tos, as the functionality is, in most cases, quite simple and could be implemented in a myriad of ways. Rather, it will present a handful of ideas for enhancing the experience of an interface at a deeper level of design without relying on the container. We’ll cover three elements that should get you thinking in the right mindset:

progress activity and post-active states
pseudo-class preloading
buttons and their (mis)behaviour

Progress activity and the post-active state

We’ve long established that we can’t control the devices our products are viewed on, which browser they’ll run in or what connection speed will be used to access them. We accept this all as factual, so why is it so often left to the browser to provide feedback to the user when an event is triggered or an error encountered? The browser isn’t part of the interface — it’s merely a container. A simple, visual recognition of your users’ activity may be all it takes to make or break the product. Let’s begin with a commonly overlooked case: progress activity. A user moves their cursor over a hyperlink or button, which is clearly defined as one by the visual language of your content. Upon doing so, they trigger the :hover state to confirm this element is indeed interactive. So far, so good. What happens next is where it starts to fall apart: the user hits this link, presumably triggering an :active state, which is then returned to the normal state upon release. And then what? From this point on, your user is in limbo. The link has fallen back to either its regular or :visited state. You’ve effectively abandoned them and are relying entirely on the browser they’re using to communicate that something is happening. This poses quite a few problems:

The user may lose focus of what they were doing.
There is little consistency between progress indication in browsers.
The user may not even notice that their action has been acknowledged.

How many times have one or more of these events happened to you due to a lack of communication from the interface? Think about the differences between Safari and Chrome in this area — two browsers that, when compared to each other, are relatively similar in nature, though this basic feature differs in execution. Like all aspects of designing the user experience, there is no one true way to fix this problem, but we can introduce details that many users will unconsciously appreciate. Consider the basic loading indicator.
It’s nothing new — in fact, some would argue it’s quite a cliché. However, whether using a spinning wheel or a progress bar, a gif or JavaScript, or something more sophisticated, these simple tools create an illusion of movement, progress and activity. Depending on the implementation, progress indication graphics can significantly increase a user’s perception of the speed at which an event takes place. Combine this with a cursor change and a lock over the element to prevent double-clicking or reloading, and your chances of keeping your user’s valuable attention have significantly increased.

Demo: Progress activity and the post-active state

This same logic applies to all aspects of defaulting in a browser, from micro-elements like this up to something as simple as a 404 page. The difference in a user’s reaction to hitting the default Apache 404 and a hand-crafted, branded page is phenomenal, and there are no prizes for guessing which one they’re more likely to exit from.

Pseudo-class preloading

Another detail that it pays well to look after is the use and abuse of the :hover state and, more importantly, the content revealed by it. Chances are you’re using the :hover pseudo-class somewhere in almost every screen you create. If content is being revealed on :hover and that content takes some time to load, there will inevitably be a delay the first time it is initiated. It appears tacky and half-finished when a tooltip or drop-down loads instantly, only to have its background or supporting elements follow through a second or two later. So, let’s preload the elements we know we’ll need. A very simple application of this would be to load each file into the default state of a visible element and offset them by a large number. This ensures our elements have loaded and are ready if and when they need to be displayed.

element { background: url(path/to/image.jpg) -9999em -9999em no-repeat; }
element .tooltip { display: none; }
element:hover .tooltip { display: block; background: url(path/to/image.jpg) 0 0; }

Background images are just one example. Of course, the same logic can apply to any form of revealed content. Using a sprite graphic can also be a clever — albeit tedious — method for achieving the same goal, so if you’re using a sprite, preloading in this way may not be necessary. The differences between preloading and not can only be visualized properly with an actual demonstration.

Demo: Preloading revealed content

Buttons and their (mis)behaviour

Almost all of the time, a button serves just one purpose: to be clicked (or tapped). When a button’s pressed, therefore, if anything other than triggering the desired event occurs, a user naturally becomes frustrated. I often get funny looks when talking about this, but designing the details of a button is something I consider essential. It goes without saying that a button should always visually recognise :hover and :active states. We can take that one step further and disable some actions that get in the way of pressing the button.
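One action worth disabling is interaction itself while an event is in flight. Looping back to the progress activity section above, here is a minimal sketch of the cursor-change-and-lock idea; the .is-busy class is hypothetical, toggled by whatever script handles the request:

/* While a request is in flight: signal progress and lock the control.
   The .is-busy class is invented for illustration, added and removed by script. */
button.is-busy {
  cursor: progress; /* activity cue at the pointer */
  pointer-events: none; /* locks the button against double-clicks; support on HTML elements varies in older browsers */
  opacity: 0.5; /* a visible post-active state */
}

With something like that in place, a pressed button can neither be double-clicked nor mistaken for an idle one.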
It’s rare that a user would ever want to select and use the text on a button, so let’s cleanly disable it:

element {
  -moz-user-select: -moz-none;
  -webkit-user-select: none;
  user-select: none;
}

If the button is image-based or contains an image, we could also disable user dragging to make sure the image element stays locked to the button:

element {
  -moz-user-drag: -moz-none;
  -webkit-user-drag: none;
  user-drag: none;
}

Demo: A more usable button

Disabling global features like this should be done with utmost caution as it’s very easy to cross the line between enhancement and friction. Cases where this is acceptable are very rare, but it’s a good trick to keep in mind nevertheless. Both Apple’s iCloud and Metalab’s Flow applications use these tools appropriately and to great effect. You could argue that the visual feedback of having the text selected or image dragged when a user mis-hits the button is actually a positive effect, informing the user that their desired action did not work. However, covering for human error should be a designer’s job, not that of our users. We can (almost) ensure it does work for them by accommodating errors like this in most cases.

Final thoughts

Designing to this level of detail can seem obsessive, but as a designer and user of many interfaces and applications, I believe it can be the difference between a good user experience and a great one. The samples you’ve just seen are only a fraction of the detail we can design for. Keep in mind that the demonstrations, code and methods above outline just one way to do this. You may not agree with all of these processes or have the time and desire to consider them, but one fact remains: it’s not the technology, or the way it’s done that’s important — it’s the logic and the concept of designing everything.

2011 Chris Sealey chrissealey 2011-12-03T00:00:00+00:00 https://24ways.org/2011/subliminal-user-experience/ ux
283 CSS3 Patterns, Explained Many of you have probably seen my CSS3 patterns gallery. It became very popular throughout the year and it showed many web developers how powerful CSS3 gradients really are. But how many really understand how these patterns are created? The biggest benefit of CSS-generated backgrounds is that they can be modified directly within the style sheet. This benefit is void if we are just copying and pasting CSS code we don’t understand. We may as well use a data URI instead.

Important note

In all the examples that follow, I’ll be using gradients without a vendor prefix, for readability and brevity. However, you should keep in mind that in reality you need to use all the vendor prefixes (-moz-, -ms-, -o-, -webkit-) as no browser currently implements them without a prefix. Alternatively, you could use -prefix-free and have the current vendor prefix prepended at runtime, only when needed. The syntax described here is the one that browsers currently implement. The specification has since changed, but no browser implements the changes yet. If you are interested in what is coming, I suggest you take a look at the dev version of the spec. If you are not yet familiar with CSS gradients, you can read these excellent tutorials by John Allsopp and return here later, as in the rest of the article I assume you already know the CSS gradient basics:

CSS3 Linear Gradients
CSS3 Radial Gradients

The main idea

I’m sure most of you can imagine the background this code generates:

background: linear-gradient(left, white 20%, #8b0 80%);

It’s a simple gradient from one color to another that looks like this:

See this example live

As you probably know, in this case the first 20% of the container’s width is solid white and the last 20% is solid green. The other 60% is a smooth gradient between these colors. Let’s try moving these color stops closer to each other:

background: linear-gradient(left, white 30%, #8b0 70%);
See this example live

background: linear-gradient(left, white 40%, #8b0 60%);
See this example live

background: linear-gradient(left, white 50%, #8b0 50%);
See this example live

Notice how the gradient keeps shrinking and the solid color areas expanding, until there is no gradient any more in the last example. We can even adjust the position of these two color stops to control where each color abruptly changes into another:

background: linear-gradient(left, white 30%, #8b0 30%);
See this example live

background: linear-gradient(left, white 90%, #8b0 90%);
See this example live

What you need to take away from these examples is that when two color stops are at the same position, there is no gradient, only solid colors. Even without going any further, this trick is useful for a number of different use cases like faux columns or the effect I wanted to achieve on my homepage or the -prefix-free page where the background is only shown on one side and hidden on the other:

Combining with background-size

We can do wonders, however, if we combine this with the CSS3 background-size property:

background: linear-gradient(left, white 50%, #8b0 50%);
background-size: 100px 100px;
See this example live

And there it is. We just created the simplest of patterns: (vertical) stripes. We can remove the first parameter (left) or replace it with top and we’ll get horizontal stripes. However, let’s face it: horizontal and vertical stripes are kinda boring. Most stripey backgrounds we see on the web are diagonal. So, let’s try doing that.
Our first attempt would be to change the angle of the gradient to something like 45deg. However, this results in an ugly pattern like this:

See this example live

Before reading on, think for a second: why didn’t this produce the desired result? Can you figure it out? The reason is that the gradient angle rotates the gradient inside each tile, not the tiled background as a whole. However, didn’t we have the same problem the first time we tried to create diagonal stripes with an image? And then we learned that every stripe has to be included twice, like so: So, let’s try to create that effect with CSS gradients. It’s essentially what we tried before, but with more color stops:

background: linear-gradient(45deg, white 25%, #8b0 25%, #8b0 50%, white 50%, white 75%, #8b0 75%);
background-size: 100px 100px;
See this example live

And there we have our stripes! An easy way to remember the order of the percentages and colors is that you always have two of the same in succession, except the first and last color. Note: Firefox for Mac also needs an additional 100% color stop at the end of any pattern with more than two stops, like so: ..., white 75%, #8b0 75%, #8b0). The bug was reported in February 2011 and you can vote for it and track its progress at Bugzilla. Unfortunately, this is essentially a hack and we will realize that if we try to change the gradient angle to 60deg:

See this example live

Not that maintainable after all, eh? Luckily, CSS3 offers us another way of declaring such backgrounds, which not only helps this case but also results in much more concise code:

background: repeating-linear-gradient(60deg, white, white 35px, #8b0 35px, #8b0 70px);
See this example live

In this case, however, the size has to be declared in the color stop positions and not through background-size, since the gradient is supposed to cover the entire container. You might notice that the declared size is different from the one specified the previous way. This is because the size of the stripes is measured differently: in the first example we specify the dimensions of the tile itself; in the second, the width of the stripes (35px), which is measured diagonally.

Multiple backgrounds

Using only one gradient you can create stripes and that’s about it. There are a few more patterns you can create with just one gradient (linear or radial) but they are more or less boring and ugly. Almost every pattern in my gallery contains a number of different backgrounds. For example, let’s create a polka dot pattern:

background: radial-gradient(circle, white 10%, transparent 10%),
            radial-gradient(circle, white 10%, black 10%) 50px 50px;
background-size: 100px 100px;
See this example live

Notice that the two gradients are almost the same image, but positioned differently to create the polka dot effect. The only difference between them is that the first (topmost) gradient has transparent instead of black. If it didn’t have transparent regions, it would effectively be the same as having a single gradient, as the topmost gradient would obscure everything beneath it. There is an issue with this background. Can you spot it? This background will be fine for browsers that support CSS gradients but, for browsers that don’t, it will be transparent as the whole declaration is ignored. We have two ways to provide a fallback, each for different use cases.
We have to either declare another background before the gradient, like so:

background: black;
background: radial-gradient(circle, white 10%, transparent 10%),
            radial-gradient(circle, white 10%, black 10%) 50px 50px;
background-size: 100px 100px;

or declare each background property separately:

background-color: black;
background-image: radial-gradient(circle, white 10%, transparent 10%),
                  radial-gradient(circle, white 10%, transparent 10%);
background-size: 100px 100px;
background-position: 0 0, 50px 50px;

The vigilant among you will have noticed another change we made to our code in the last example: we altered the second gradient to have transparent regions as well. This way background-color serves a dual purpose: it sets both the fallback color and the background color of the polka dot pattern, so that we can change it with just one edit. Always strive to make code that can be modified with the least number of edits. You might think that it will never be changed in that way but, almost always, given enough time, you’ll be proved wrong. We can apply the exact same technique with linear gradients, in order to create checkerboard patterns out of right triangles:

background-color: white;
background-image: linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%),
                  linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%);
background-size: 100px 100px;
background-position: 0 0, 50px 50px;
See this example live

Using the right units

Don’t use pixels for the sizes without any thought. In some cases, ems make much more sense. For example, when you want to make a lined paper background, you want the lines to actually follow the text. If you use pixels, you have to change the size every time you change font-size. If you set the background-size in ems, it will naturally follow the text and you will only have to update it if you change line-height.

Is it possible?

The shapes that can be achieved with only one gradient are:

stripes
right triangles
circles and ellipses
semicircles and other shapes formed from slicing ellipses horizontally or vertically

You can combine several of them to create squares and rectangles (two right triangles put together), rhombi and other parallelograms (four right triangles), curves formed from parts of ellipses, and other shapes.

Just because you can doesn’t mean you should

Technically, anything can be crafted with these techniques. However, not every pattern is suitable for it. The main advantages of this technique are:

no extra HTTP requests
short code
human-readable code (unlike data URIs) that can be changed without even leaving the CSS file.

Complex patterns that require a large number of gradients are probably better left to SVG or bitmap images, since they negate almost every advantage of this technique:

they are not shorter
they are not really comprehensible – changing them requires much more effort than using an image editor

They still save an HTTP request, but so does a data URI. I have included some very complex patterns in my gallery, because even though I think they shouldn’t be used in production (except under very exceptional conditions), understanding how they work and coding them helps somebody understand the technology in much more depth. Another rule of thumb is that if your pattern needs shapes to obscure parts of other shapes, like in the star pattern or the yin yang pattern, then you probably shouldn’t use it.
In these patterns, changing the background color requires you to also change the color of these shapes, making edits very tedious. If a certain pattern is not practicable with a reasonable amount of CSS, that doesn’t mean you should resort to bitmap images. SVG is a very good alternative and is supported by all modern browsers. Browser support CSS gradients are supported by Firefox 3.6+, Chrome 10+, Safari 5.1+ and Opera 11.60+ (linear gradients since Opera 11.10). Support is also coming in Internet Explorer when IE10 is released. You can get gradients in older WebKit versions (including most mobile browsers) by using the proprietary -webkit-gradient(), if you really need them. Epilogue I hope you find these techniques useful for your own designs. If you come up with a pattern that’s very different from the ones already included, especially if it demonstrates a cool new technique, feel free to send a pull request to the github repo of the patterns gallery. Also, I’m always fascinated to see my techniques put in practice, so if you made something cool and used CSS patterns, I’d love to know about it! Happy holidays! 2011 Lea Verou leaverou 2011-12-16T00:00:00+00:00 https://24ways.org/2011/css3-patterns-explained/ code
282 Front-end Style Guides We all know that feeling: some time after we launch a site, new designers and developers come in and make adjustments. They add styles that don’t fit with the content, use typefaces that make us cringe, or chuck in bloated code. But if we didn’t leave behind any documentation, we can’t really blame them for messing up our hard work. To counter this problem, graphic designers are often commissioned to produce style guides as part of a rebranding project. A style guide provides details such as how much white space should surround a logo, which typefaces and colours a brand uses, along with when and where it is appropriate to use them.

Design guidelines

Some design guidelines focus on visual branding and identity. The UK National Health Service (NHS) refer to theirs as “brand guidelines”. They help any designer create something such as a trustworthy leaflet for an NHS doctor’s surgery. Similarly, Transport for London’s “design standards” ensure the correct logos and typefaces are used in communications, and that they comply with the Disability Discrimination Act. Some guidelines go further, encompassing a whole experience, from the visual branding to the messaging, and the icon sets used. The BBC calls its guidelines a “Global Experience Language” or GEL. It’s essential for maintaining coherence across multiple sites under the same BBC brand.

The BBC’s Global Experience Language.

Design guidelines may be brief and loose to promote creativity, like Mozilla’s “brand toolkit”, or be precise and run to many pages to encourage greater conformity, such as Apple’s “Human Interface Guidelines”. Whatever name or form they’re given, documenting reusable styles is invaluable when maintaining a brand identity over time, particularly when more than one person (who may not be a designer) is producing material.

Code standards documents

We can make a similar argument for code. For example, in open source projects, where hundreds of developers are submitting code, it makes sense to set some standards. Drupal and WordPress have written standards that make editing code less confusing for users, and more maintainable for contributors. Each community has nuances: Drupal requests that developers indent with two spaces, while WordPress stipulates a tab. Whatever the rules, good code standards documents also explain why they make their recommendations.

The front-end developer’s style guide

Design style guides and code standards documents have been a successful way of ensuring brand and code consistency, but in between the code and the design examples, web-based style guides are emerging. These are maintained by front-end developers, and are more dynamic than visual design guidelines, documenting every component and its code on the site in one place. Here are a few examples I’ve seen in the wild:

Natalie Downe’s pattern portfolio

Natalie created the pattern portfolio system while working at Clearleft. The phrase describes a single HTML page containing all the site’s components and styles that can act as a deliverable.

Pattern portfolio by Natalie Downe for St Paul’s School, kept up to date when new components are added. The entire page is about four times the length shown.

Each different item within a pattern portfolio is a building block or module. The components are decoupled from the layout, and linearized so they can slot into anywhere on a page. The pattern portfolio expresses every component and layout structure in the smallest number of documents.
It sets out how the markup and CSS should be, and is used to illustrate the project’s shared vocabulary.
– Natalie Downe

Developing a system, rather than individual pages, leaves you with something flexible when the client wants to add more pages later on.

Paul Lloyd’s style guide

Paul Lloyd has written an extremely comprehensive style guide for his site. Not only does it feature every plausible element, but it also explains in detail when it’s appropriate to use each one. Paul’s style guide is also great educational material for people learning to write code.

Oli Studholme’s style guide

Even though Oli’s style guide is specific to his site, he’s written it as though it’s for someone else. It’s exhaustive and gives justifications for all his decisions. In some places, he links to browser bug tickets and makes recommendations for cross-browser compatibility. Oli has released his style guide under a Creative Commons Attribution Share-alike license, and encourages others to create their own versions.

Jeremy Keith’s pattern primer

Front-end style guides may have comments written in the code, annotations that appear on the page, or they may list components alongside their code, like Jeremy’s pattern primer. You can watch or fork Jeremy’s pattern primer on GitHub. Linearizing components like this resembles a kind of mobile first approach to development, which Jeremy talks about on the 5by5 podcast: The Web Ahead 3.

The benefits of maintaining a front-end style guide

If you still need convincing that producing documentation like this for every project is worth the effort, here are a few nice side-effects to working this way:

Easier to test

A unified style guide makes it easier to spot where your design breaks. It’s simple to check how components adapt to different screen widths, test for browser bugs and develop print style sheets when everything is on the same page. When I worked with Natalie, she’d resize the browser window and bump the text size up and down during development to see if anything would break.

Better workflow

The approach also forces you to think how something works in relation to the whole site, rather than just a specific page, making it easier to add more pages later on. Starting development by creating a style guide makes a lot more sense than developing on a page-by-page basis.

Shared vocabulary

Natalie’s pattern portfolio in particular creates a shared vocabulary of names for components (teaser, global navigation, carousel…), so a team can refer to different regions of the site and have a shared understanding of its meaning.

Useful reference

A combined style guide also helps designers and writers to see the elements that will be incorporated in the site and, therefore, which need to be designed or populated. A boilerplate list of components for every project can act as a reminder of things that may get missed in the design, such as button states or error messages.

Creating your front-end style guide

As you’ve seen, there are plenty of variations on the web style guide. Which method is best depends on your project and workflow. Let’s say you want to show your content team how blockquotes and asides look, when it’s appropriate to use them, and how to create them within the CMS. In this case, a combination of Jeremy’s pattern primer and Paul’s descriptive style guide – with the styled component alongside a code snippet and a description of when to use it – may be ideal.
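One lightweight way to approximate that in the style sheet itself is to keep the usage notes as comments beside each component’s rules. A sketch only, with invented styles:

/* Blockquote: for quotations that run to two lines or more.
   Shorter, inline quotations should use q instead.
   These rules are illustrative, not a recommendation. */
blockquote {
  margin: 1.5em 2em;
  padding-left: 1em;
  border-left: 4px solid #ccc;
  font-style: italic;
}

Rendered on a style guide page next to the snippet and a sample blockquote, the comment doubles as the “when to use it” description.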
Start work on your style guide as soon as you can, preferably during the design stage: Simply presenting flat image comps is by no means enough - it’s only the start… As layouts become more adaptable, flexible and context-specific, so individual components will become the focus of our design. It is therefore essential to get the foundational aspects of our designs right, and style guides allow us to do that. Paul Lloyd on Style guides for the Web Print out the designs and label the unique elements and components you’ll need to add to your style guide. Make a note of the purpose of each component. While you’re doing this, identify the main colours used for things like links, headings and buttons. I draw over the print-outs on to tracing paper so I can make more annotations. Here, I’ve started annotating the widths from the designer’s mockup so I can translate these into percentages. Start developing your style guide with base styles that target core elements: headings, links, tables, blockquotes, ordered lists, unordered lists and forms. For these elements, you could maintain a standard document to reuse for every project. Next, add the components that override the base styles, like search boxes, breadcrumbs, feedback messages and blog comments. Include interaction styles, such as hover, focus and visited state on links, and hover, focus and active states on buttons. Now start adding layout and begin slotting the components into place. You may want to present each layout as a separate document, or you could have them all on the same page stacked beneath one another. Document code practices Code can look messy when people use different conventions, so note down a standard approach alongside your style guide. For example, Paul Stanton has documented how he writes CSS. The gift wrapping Presenting this documentation to your client may be a little overwhelming so, to be really helpful, create a simple page that links together all your files and explains what each of them do. This is an example of a contents page that Clearleft produce for their clients. They’ve added date stamps, subversion revision numbers and written notes for each file. Encourage participation There’s always a risk that the person you’re writing the style guide for will ignore it completely, so make your documentation as user-friendly as possible. Justify why you do things a certain way to make it more approachable and encourage similar behaviour. As always, good communication helps. Working with the designer to put together this document will improve the site. It’s often not practical for designers to provide a style for everything, so drafting a web style guide and asking for feedback gives designers a chance to make sure any default styles fit in. If you work in a team with other developers, documenting your code and development decisions will not only be useful as a deliverable, but will also force you to think about why you do things a certain way. Future-friendly The roles of designer and developer are increasingly blurred, yet all too often we work in isolation. Working side-by-side with designers on web style guides can vastly improve the quality of our work, and the collaborative approach can spark discussions like “how would this work on a smaller screen?” Sometimes we can be so focused on getting the site ready and live, that we lose sight of what happens after it’s launched, and how it’s going to be maintained. A simple web style guide can make all the difference. 
If you make your own style guide, I’d love to add it to my stash of examples so please share a link to it in the comments. 2011 Anna Debenham annadebenham 2011-12-07T00:00:00+00:00 https://24ways.org/2011/front-end-style-guides/ process
281 Nine Things I’ve Learned I’ve been a professional graphic designer for fourteen years and for just under four of those a professional web designer. Like most designers I’ve learned a lot in my time, both from a design point of view and in business as a freelance designer. A few of the things I’ve learned stick out in my mind, so I thought I’d share them with you. They’re pretty random and in no particular order.

1. Becoming the designer you want to be

When I started out as a young graphic designer, I wanted to design posters and record sleeves, pretty much like every other young graphic designer. The problem is that the reality of the world means that when you get your first job you’re designing the back of a paracetamol packet or something equally weird. I recently saw a tweet that went something like this: “You’ll never become the designer you always dreamt of being by doing the work you never wanted to do”. This is so true; to become the designer you want to be, you need to be designing the things you’re passionate about designing. This probably means working in the evenings and weekends for little or no money, but it’s time well spent. Doing this will build up your portfolio with the work that really shows what you can do! Soon, someone will ask you to design something based on having seen this work. From this point, you’re carving your own path in the direction of becoming the designer you always wanted to be.

2. Compete on your own terms

As well as all being friends, we are also competitors. In order to win new work we need a selling point, preferably a unique selling point. Web design is a combination of design disciplines – user experience design, user interface design, visual design, development, and so on. Some companies will sell themselves as UX specialists, which is fine, but everyone who designs a website from scratch does some sort of UX, so it’s not really a unique selling point. Of course, some people do it better than others. One area of web design that clients have a strong opinion on, and will judge you by, is visual design. It’s an area in which it’s definitely possible to have a unique selling point. Designing the visual aesthetic for a website is a combination of logical decision making and a certain amount of personal style. If you can create a unique visual style in your work, it can become a selling point that’s unique to you.

3. How much to charge and staying motivated

When you’re a freelance designer one of the hardest things to do is put a price on your work and skills. Finding the right amount to charge is a fine balance between supplying value to your customer and also charging enough to stay motivated to do a great job. It’s always tempting to offer a low price to win work, but it’s often not the best approach: not just for yourself but for the client as well. A client once asked me if I could reduce my fee by £1,000 and still be motivated enough to do a good job. In this case the answer was yes, but it was the question that resonated with me. I realized I could use this as a gauge to help me price projects. Before I send out a quote I now always ask myself the question “Is the amount I’ve quoted enough to make me feel motivated to do my best on this project?” I never send out a quote unless the answer is yes. In my mind there’s no point in doing any project half-heartedly, as every project is an opportunity to build your reputation and expand your portfolio to show potential clients what you can do.
Offering a client a good price but not being prepared to put everything you have into it isn’t value for money.

4. Supplying the right design

When I started out as a graphic designer it seemed to be the done thing to supply clients with a ton of options for their logo or brochure designs. In a talk given by Dan Rubin, he mentioned that this was a legacy of agencies competing with each other in a bid to create the illusion of offering more value for money. Over the years, I’ve realized that offering more than one solution makes no sense. The reason a client comes to you as a designer is because you’re the person that can get it right. If I were to supply three options, I’d be knowingly offering my client at least two options that I didn’t think worked. To this day I still get asked how many homepage design options I’ll supply for the quoted amount. The answer is one. Of course, I’m more than happy to iterate upon the design to fine-tune it and, on the odd occasion, I do revisit a design concept if I just didn’t nail the design first time around. Your time is much better spent refining the right design option than rushing out three substandard designs in the same amount of time.

5. Colour is key

There are many contributing factors that go into making a good visual design, but one of the simplest ways to do this is through the use of colour. The colour palette used in a design can have such a profound effect on a visual design that it almost feels like you’re cheating. It’s easy to add more and more subtle shades of colour to add a sense of sophistication and complexity to a design, but it dilutes the overall visual impact. When I design, I almost have a rule that only allows me to use a very limited colour palette. I don’t always stick to it, but it’s always in mind and something I’m constantly reviewing through my design process.

6. Creative thinking is central to good or boundary-pushing web design

When we think of creativity in web design we often link this to the visual design, as there is an obvious opportunity to be creative in this area if the brief allows it. Something that I’ve learnt in my time as a web designer is that there’s a massive need for creative thinking in the more technical aspects of web design. The tools we use for building websites are there to be manipulated and used in creative ways to design exciting and engaging user experiences. Great developers are constantly using their creativity to push the boundaries of what can be done with CSS, jQuery and JavaScript. Being creative and creative thinking are things we should embrace as an industry and they are qualities that can be found in anyone, whether they be a visual designer or Rails developer.

7. Creative block: don’t be afraid to get things wrong

Creative block can be a killer when designing. It’s often applied to visual design, which is more subjective. I suffer from creative block on a regular basis. It’s hugely frustrating and can screw up your schedule. Having thought about what creative block actually is, I’ve come to the conclusion that it’s actually more of a lack of direction than a lack of ideas. You have ideas and solutions in mind but don’t feel committed to any of them. You’re scared that whatever direction you take, it’ll turn out to be wrong. I’ve found that the best remedy for this is to work through this barrier. It’s a bit like designing with a blindfold on – you don’t really know where you’re going.
If you stick to your guns and keep pressing forward I find that, nine times out of ten, this process leads to a solution. As the page begins to fill, the direction you’re looking for slowly begins to take shape. 8. You get better at designing by designing I often get emails asking me what books someone can read to help them become a better designer. There are a lot of good books on subjects like HTML5, CSS, responsive web design and the like, that will really help improve anyone’s web design skills. But, when it comes to visual design, the best way to get better is to design as much as possible. You can’t follow instructions for these things because design isn’t following instructions. A large part of web design is definitely applying a set of widely held conventions, but there’s another part to it that is invention and the only way to get better at this is to do it as much as possible. 9. Self-belief is overrated Throughout our lives we’re told to have self-belief. Self-belief and confidence in what we do, whatever that may be. The problem is that some people find it easier than others to believe in themselves. I’ve spent years trying to convince myself to believe in what I do but have always found it difficult to have complete confidence in my design skills. Self-doubt always creeps in. I’ve realized that it’s ok to doubt myself and I think it might even be a good thing! I’ve realized that it’s my self-doubt that propels me forward and makes me work harder to achieve the best results. The reason I’m sharing this is because I know I’m not the only designer that feels this way. You can spend a lot of time fighting self-doubt only to discover that it’s your body’s natural mechanism to help you do the best job possible. 2011 Mike Kus mikekus 2011-12-11T00:00:00+00:00 https://24ways.org/2011/nine-things-ive-learned/ business
280 Conditional Loading for Responsive Designs On the eighteenth day of last year’s 24 ways, Paul Hammond wrote a great article called Speed Up Your Site with Delayed Content. He outlined a technique for loading some content — like profile avatars — after the initial page load. This gives you a nice performance boost. There’s another situation where this kind of delayed loading could be really handy: mobile-first responsive design. Responsive design combines three techniques:

a fluid grid
flexible images
media queries

At first, responsive design was applied to existing desktop-centric websites to allow the layout to adapt to smaller screen sizes. But more recently it has been combined with another innovative approach called mobile first. Rather than starting with the big, bloated desktop site and then scaling down for smaller devices, it makes more sense to start with the constraints of the small screen and then scale up for larger viewports. Using this approach, your layout grid, your large images and your media queries are applied on top of the pre-existing small-screen design. It’s taking progressive enhancement to the next level. One of the great advantages of the mobile-first approach is that it forces you to really focus on the core content of your page. It might be more accurate to think of this as a content-first approach. You don’t have the luxury of sidebars or multiple columns to fill up with content that’s just nice to have rather than essential. But what happens when you apply your media queries for larger viewports and you do have sidebars and multiple columns? Well, you can load in that nice-to-have content using the same kind of Ajax functionality that Paul described in his article last year. The difference is that you first run a quick test to see if the viewport is wide enough to accommodate the subsidiary content. This is conditional delayed loading. Consider this situation: I’ve published an article about cats and I’d like to include relevant cat-related news items in the sidebar …but only if there’s enough room on the screen for a sidebar. I’m going to use Google’s News API to return the search results. This is the ideal time to use delayed loading: I don’t want a third-party service slowing down the rendering of my page so I’m going to fire off the request after my document has loaded.
Here’s an example of the kind of Ajax function that I would write: var searchNews = function(searchterm) { var elem = document.createElement('script'); elem.src = 'http://ajax.googleapis.com/ajax/services/search/news?v=1.0&q='+searchterm+'&callback=displayNews'; document.getElementsByTagName('head')[0].appendChild(elem); }; I’ve provided a callback function called displayNews that takes the JSON result of that Ajax request and adds it to an element on the page with the ID newsresults: var displayNews = function(news) { var html = '', items = news.responseData.results, total = items.length; if (total>0) { for (var i=0; i<total; i++) { var item = items[i]; html+= '<article>'; html+= '<a href="'+item.unescapedUrl+'">'; html+= '<h3>'+item.titleNoFormatting+'</h3>'; html+= '</a>'; html+= '<p>'; html+= item.content; html+= '</p>'; html+= '</article>'; } document.getElementById('newsresults').innerHTML = html; } }; Now, I can call that function at the bottom of my document: <script> searchNews('cats'); </script> If I only want to run that search when there’s room for a sidebar, I can wrap it in an if statement: <script> if (document.documentElement.clientWidth > 640) { searchNews('cats'); } </script> If the browser is wider than 640 pixels, that will fire off a search for news stories about cats and put the results into the newsresults element in my markup: <div id="newsresults"> <!-- search results go here --> </div> This works pretty well but I’m making an assumption that people with small-screen devices wouldn’t be interested in seeing that nice-to-have content. You know what they say about assumptions: they make an ass out of you and umptions. I should really try to give everyone at least the option to get to that extra content: <div id="newsresults"> <a href="http://www.google.com/search?q=cats&tbm=nws">Search Google News</a> </div> See the result Visitors with small-screen devices will see that link to the search results; visitors with larger screens will get the search results directly. I’ve been concentrating on HTML and JavaScript, but this technique has consequences for content strategy and information architecture. Instead of thinking about possible page content in a binary way as either ‘on the page’ or ‘not on the page’, conditional loading introduces a third ‘it’s complicated’ option. This was just a simple example but I hope it illustrates that conditional loading could become an important part of the content-first responsive design approach. 2011 Jeremy Keith jeremykeith 2011-12-02T00:00:00+00:00 https://24ways.org/2011/conditional-loading-for-responsive-designs/ ux
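A hedged aside on that width test: where the matchMedia API is available, the same check can be keyed off the very media query that drives the layout, so the JavaScript and the CSS can’t drift apart. This is a sketch only, not part of the article above; the 640-pixel breakpoint and the searchNews function come from that example, and the feature test simply skips the call in browsers without matchMedia (where the article’s clientWidth check would still be needed).

<script>
// Sketch: gate the Ajax call on the same media query the layout uses.
// In browsers without matchMedia this does nothing at all.
if (window.matchMedia && window.matchMedia('(min-width: 640px)').matches) {
    searchNews('cats');
}
</script>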
279 Design the Invisible to Tell Better Stories on the Web For design to be meaningful we need to tell stories. We need to design the invisible, the cues, the messages and the extra detail hidden beneath the aesthetics. It’s all about the story. From verbal exchanges around the campfire to books, the web and everything in between, storytelling allows us to share, organize and process information more efficiently. It helps us understand our surroundings and make emotional connections to people, places and experiences. Web design lends itself perfectly to the conventions of storytelling, a universal process. However, the stories vary because they’re defined by culture, society, politics and religion. All of which need considering if you are to design stories that are relevant to your target audience. The benefit of approaching design with storytelling in mind from the very start of a project is that we create considered design that allows users to quickly gather meaning from the website. They do this by reading between the lines and drawing on the wealth of knowledge they have acquired about the associations between colours, typefaces and signs. With so much recognition and analysis happening subconsciously you have to consider how design communicates on this level. This invisible layer has a significant impact on what you say, how you say it and who you say it to. How can we design something that’s invisible? By researching and making conscious decisions about exactly what you are communicating, you can make the invisible visible. As is often quoted, good design is like air, you only notice it when it’s bad. So by designing the invisible the aim is to design stories that the audience receive subliminally, so that they go somewhat unnoticed, like good air. Storytelling strands To share these stories through design, you can break it down into several strands. Each strand tells a story on its own, but when combined they may start to tell a different story altogether. These strands are colour, typefaces, branding, tone of voice and symbols. All are literal and visible but the invisible element is the meaning behind them – meaning that you can extract and share. In this article I want to focus on colour, typefaces and tone of voice and on how combining story strands can change the meaning. Colour Let’s start with colour. Red represents emotions such as love but can also signify war. Green is commonly used for all things environmental and purple is a colour that connotes wealth and royalty. These associations between colour and emotion or value have been learned over time and are continually reinforced through media and culture. With this knowledge come expectations from your users. For example, they will expect Valentine’s Day sites to be red and kids’ sites to be bright and colourful. This is true in the same way audiences have expectations of certain genres of film or music. These conventions help savvy audiences decode texts and read between the lines or, rather, to draw meaning from the invisible. It’s practically an innate skill. This is why you need to design the invisible: because users will quickly deduce meaning from your site and fill in the story’s gaps, it’s important to give them as much of that story to begin with. A story relevant to their culture. Of all the ways you can tell stories through web design, colour is the most fascinating and important. Not only does it evoke emotions in users but its meaning varies significantly between cultures.
In the west, for example, white is a colour associated with weddings, and black is the colour of mourning. This is signified by the traditions of brides wearing white and those in mourning wearing black. In other cultures the meanings are reversed, as black is a colour that represents good luck and white is a colour that signifies mourning. If you assume the same values are true in all cultures then you risk offending the very people you are targeting. When colours combine, the story being told can change. If you design using red, white and blue then it’s going to be difficult to shake off patriotic connotations because this colour combination is so ingrained as being American or British or French thanks largely to their flags. This extends to politics too. Each party has its own representative colour. In the UK, the Conservatives are blue and Labour is red so it would be inappropriate storytelling to design a Labour-related website in blue as there would be a conflict between the content and the design, a conflict that would result in a poor user experience. Conflicts become more of an issue when you start to combine story strands. I once saw a No VAT advert that used a green prohibition sign. There’s a complete conflict in storytelling here between the sign and its colour. The prohibition sign was used over the word VAT to mean no VAT; that makes sense. But this is a symbol that is used to communicate to people that something is being prohibited or prevented, it mustn’t continue. So to use green contradicts the message of the sign itself; green is used as a colour to say yes, go, proceed, enter. The same would be true if we had a tick in red and a cross in green. Bad design here means the story is flawed and the user experience is compromised. Typefaces Typefaces also tell stories. They are so much more than the words that are written with them because they connote different values. Here are a few: Serif fonts are more formal and are associated with tradition, sophistication and high-end values. Sans serif fonts, on the other hand, are synonymous with modernity, informality and friendliness. These perceptions are again reinforced through more traditional media such as newspaper mastheads, where the serious news-focused broadsheets have serif titles, and the showbiz and gossip-led tabloids have sans serif titles. This translates to the web as well. With these associations already familiar to users, they may see copy and focus on the words, but if the way that copy is displayed jars with the context then we are back to having conflicting stories like the No VAT sign earlier. Let’s take official institutions, for example. The White House, the monarchy, 10 Downing Street and other government departments are formal, traditional and important organisations. If the copy on their websites were written in a typeface like Cooper Black, it would erase any authority and respect that they were due. They need people to take them seriously and trust them, and part of the way to do this is to have a typeface that represents those values. It works both ways though. If Innocent, Threadless or other fun companies used traditional typefaces, they wouldn’t be accurate reflections of their core values, brand and personality. They are better positioned to use friendly, informal and modern typefaces. But still never Comic Sans. Tone of voice Closely tied to this is tone of voice, my absolute bugbear on the web. Tone of voice isn’t what is said but, rather, how it is said.
When we interact with others in person we don’t just listen to the words they say, but we also draw meaning from their body language, and pitch and tone of voice. Just because the web removes that face-to-face interaction with your audience it doesn’t mean you can’t have a tone of voice. Innocent pioneered the informal chatty tone of voice that so many others have since emulated, but unless it is representative of your company, then it’s not appropriate. It works for Innocent because the tone of voice is consistent across all the company’s materials, both online and offline. Ben and Jerry’s takes the same approach, as does Threadless, but maybe you need a more formal or corporate tone of voice. It really depends on what your business or service is and who it is for, and that is why I think LoveFilm has it all wrong. LoveFilm offers a film and game rental service, something fun for people in their downtime. While they aren’t particularly stuffy, neither is their tone of voice very friendly or informal, which is what I would expect from a service like theirs. The reason they have it wrong is in the language they use and the way their sentences are constructed. This is the first time we’ve discussed language because, on the whole, designing the invisible isn’t concerned with language at all. But that doesn’t mean that these strands can’t still elicit an emotional response in users. Jon Tan quoted Dr Mazviita Chirimuuta in his New Adventures in Web Design talk in January 2011: Although there is no absolute separation between language and emotion, there will still be countless instances where you have emotional response without verbal input or linguistic cognition. In general language is not necessary for emotion. This is even more pertinent when the emotions evoked are connected to people’s culture, surroundings and way of life. It makes design personal, something that audiences can connect with at more than just face value but, rather, on a subliminal or, indeed, invisible level. It also means that when you are asked the inevitable question of why – why is blue the dominant colour? why have you used that typeface? why don’t we sound like Innocent? – you will have a rationale behind each design decision that can explain what story you are telling, how you discovered the story and how it is targeted at the core audience. Research This is where research plays a vital role in the project cycle. If you don’t know and understand your audience then you don’t know what story to design. Every project lends itself to some level of research, but how in-depth and what methods are most appropriate will be dictated by project requirements and budget restrictions – but do your research. Even if you think you know your audience, it doesn’t hurt to research and reaffirm that, because cultures and societies do change, albeit slowly. So ask questions at the start of the project during the research phase: What do different colours mean for your audience’s culture? Do the typeface and tone of voice appeal to the demographic? Does the brand identity represent the values and personality of your service? Are there any social, political or religious significances associated with your audience that you need to take into consideration so you don’t offend them? Ask questions, understand your audience, design your story based on these insights, and create better user experiences in context that have good, solid storytelling at their heart.
Major hat tip to Gareth Strange for the beautiful graphics used within this article. 2011 Robert Mills robertmills 2011-12-14T00:00:00+00:00 https://24ways.org/2011/design-the-invisible/ design
278 Going Both Ways It’s that time of the year again: Santa is getting ready to travel the world. Up until now, girls and boys from all over have sent in letters asking for what they want. I hope that Santa and his elves have—unlike me—learned more than just English. On the Internet, those girls and boys want to participate in sharing their stories and videos of opening presents and of being with friends and family. Ah, yes, the wonders of user generated content. But more than that, people also want to be able to use sites in the language they know. While you and I might expect the text to read from left to right, not all languages do. Some go from right to left, such as Arabic and Hebrew. (Some also go from top to bottom, but for now, let’s just worry about those first two directions!) If we were building a site for girls and boys to send their letters to Santa, we’d need to consider having the interface in the language and direction that they prefer. On the elves’ side, they may be viewing the site in one direction but reading the user generated content in the other direction. We need to build a site that supports bidirectional (or bidi) text. Let’s take a look at some things to be aware of when it comes to building bidi interfaces. Setting the direction of the interface Right off the bat, we need to tell the browser what direction the text should be going in. To do this, we add the dir attribute to an HTML element and set it to either ltr (for left to right) or rtl (for right to left). <body dir="rtl"> You can add the dir attribute to any element and it will set or change the direction for the content within that element. <body dir="ltr"> Here is English Content. <div dir="rtl">الموضوع</div> </body> You can also set the direction via CSS. .rtl { direction: rtl; } It’s generally recommended that you don’t use CSS to set the direction of the text. Text direction is an important part of the content that should be retained even in environments where the CSS may not be available or fails to load. How things change with the direction attribute Just adding the dir attribute tells the browser to render the content within it differently. The text aligns to the right of the page and, interestingly, punctuation appears at the left of the sentence. (We’ll get to that in a little bit.) Scrollbars in most browsers will appear on the left instead of the right. Webkit is the notable exception here: it always shows the scrollbar on the right, no matter what the text direction is. Avoid having a design that has an expectation that the scrollbar will be in a specific place (and a specific size). Changing the order of text mid-way As we saw in that previous example, the punctuation appeared at the beginning of the sentence instead of the end, even though the text was English. At Yahoo!, we have an interesting dilemma where the company name has punctuation in it. Therefore, when the name appears in the middle of (for example) Arabic text, the exclamation mark appears at the beginning of the word instead of the end. There are two ways in which this problem can be solved: 1. Use an HTML element around the left-to-right content, or To solve the problem of the Yahoo! name in the midst of Arabic text, we can wrap a span around it and change the direction on that element. 2. Use a text direction mark in the content. Unicode has two marks, U+200E and U+200F, that tell the browser that the text is in a particular direction. Placing this right after the punctuation will correct the placement.
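Before moving on to the second approach, here’s a hedged sketch of the first, wrap-it-in-markup approach (the surrounding Arabic copy is just the placeholder word from the earlier example, not real content):

<!-- Sketch: isolate the LTR brand name inside RTL copy so its punctuation stays put -->
<p dir="rtl">الموضوع <span dir="ltr">Yahoo!</span> الموضوع</p>

With the span in place, the exclamation mark renders at the end of the name rather than jumping to its left. The second approach embeds the direction mark in the text itself, as the next example shows.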
Using the HTML entity: Yahoo!&lrm; Tables Thankfully, the cells of a data table also get reordered from right to left. Equally as nice, if you’re using display:table, the content will still get reordered. CSS So far, we’ve seen that the dir attribute does a pretty decent job of getting content flowing in the direction that we need it. Unfortunately, there are huge swaths of design that are handled by CSS over which the handy dir attribute has zero effect. Many properties, like float or absolute positioning with left and right values, are unaffected and must be handled manually. Elements that were floated left must now be floated right. Left margins and paddings must now move to the right and the right margins and paddings must now move to the left. Since the browser won’t handle this for us, we have a couple of approaches that we can use: CSS Only We can take advantage of the attribute selector to target CSS to apply in one direction or another. [dir=ltr] .module { float: left; margin: 0 0 0 20px; } [dir=rtl] .module { float: right; margin: 0 20px 0 0; } As you can see from this example, both of the properties have been modified for the flipped interface. If your interface is rather complicated, you will have to create a lot of duplicate rules to have the site looking good in both directions while serving up a single stylesheet. CSSJanus Google has a tool called CSSJanus. It’s a Python script that runs over the LTR versions of your CSS files and generates RTL versions. For the RTL version of the site, just serve up those CSS files instead of the LTR versions. The script looks for keywords and value combinations and automatically swaps them so you don’t have to. At Yahoo!, CSSJanus was a huge help in speeding up our development of a bidi interface. We’ve also made a number of improvements to the script to better handle border radius, background positioning, and gradients. We will be pushing those changes back into the CSSJanus project. Background Images Background images, especially for things like CSS sprites, also raise an interesting dilemma. Background images are positioned relative to the left of the element. In a flipped interface, however, we need to position them relative to the right. An icon that would be to the left of some text will now need to appear on the right. If the x position of the background is percentage-based, then it’s fairly easy to swap the values. 0 becomes 100%, 10% becomes 90% and so on. If the x position is pixel-based, then we’re in a bit of a pickle. There’s no way to say that the image should be a certain number of pixels from the right. Therefore, you’ll need to ensure that any background image that needs to be swapped should be percentage-based. (99.9% of the time, the background position will need to be 0 so that it can be changed to 100% for RTL.) If you’re taking an existing implementation, background positioning will likely be the biggest hurdle you’ll have to overcome in swapping your interface around. If you make sure your x position is always percentage-based from the beginning, you’ll have a much smoother process ahead of you! Flipping Images This is a more subtle point and one where you’ll really want an expert familiar with the region to weigh in. In RTL interfaces, users may expect certain icons to also be flipped. Pencil icons that skew to the right in LTR interfaces might need to be swapped to skew to the left, instead. Chat bubbles that come from the left will need to come from the right. The easiest way to handle this is to create new images.
Name the LTR versions with -ltr in the name and name the RTL versions with -rtl in the name. CSSJanus will automatically rename all file references from -ltr to -rtl. The Future Thankfully, those within the W3C recognize that CSS should be more agnostic. As a result, they’ve begun introducing new properties that allow the browser to manage the swapping from left to right for us. The CSS3 specification for backgrounds allows for the background-position to be relative to corners other than the top left by specifying keywords before each position. This will position the background 5px from the bottom right of the element. background-position: right 5px bottom 5px; Opera 11.60 is currently the only browser that supports this syntax. For margin and padding, we have margin-start and margin-end. In LTR interfaces, margin-start would be the same as margin-left and in RTL interfaces, margin-start would be the same as margin-right. Firefox and Webkit support these but with vendor prefixes right now: -webkit-margin-start: 20px; -moz-margin-start: 20px; In the CSS3 Images working draft specification, there’s an image() property that allows us to specify image fallbacks and whether those fallbacks are for LTR or RTL interfaces. background: image('sprite.png' ltr, 'sprite-rtl.png' rtl); Unfortunately, no browser supports this yet but it’s nice to be able to dream of how much easier this will be in the future! Ho Ho Ho Hopefully, after all of this, you’re full of cheer knowing that you’re well on your way to creating interfaces that can go both ways! 2011 Jonathan Snook jonathansnook 2011-12-19T00:00:00+00:00 https://24ways.org/2011/going-both-ways/ ux
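One addendum to the background positioning advice: the 0-to-100% swap was described above only in prose, so here’s a hedged sketch of what the paired rules might look like, written in the same attribute-selector style used earlier (the .icon selector and sprite file are hypothetical):

/* Sketch: percentage-based x positions can be flipped for RTL; pixel-based ones cannot */
[dir=ltr] .icon { background: url(sprite.png) no-repeat 0 50%; }
[dir=rtl] .icon { background: url(sprite.png) no-repeat 100% 50%; }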
277 Raising the Bar on Mobile One of the primary challenges of designing for mobile devices is that screen real estate is often in limited supply. Through the advocacy of Luke W and others, we’ve drawn comfort from the idea that this constraint ends up benefiting users and designers alike, from obvious advantages like portability and reach, to influencing our content strategy decisions through focus and restraint. But that doesn’t mean we shouldn’t take advantage of every last pixel of that screen we can snag! As anyone who has designed a website for use on a smartphone can attest, there’s an awful lot of space on mobile screens dedicated to browser functions that would be better off toggled out of view. Unfortunately, the visibility of some of these elements is beyond our control, such as the buttons fixed to the bottom of the viewport in iOS’s Safari and the WebOS browser. However, in many devices, the address bar at the top can be manually hidden, and its absence frees up enough pixel room for a large, impactful heading, a critical piece of navigation, or even just a little more white space to air things out. So, as my humble contribution to this most festive of web publications, today I’ll dig into the approach I used to hide the address bar in a browser-agnostic fashion for sites like BostonGlobe.com, and the jQuery Mobile framework. Surveying the land First, let’s assess the chromes of some popular, current mobile browsers. For example purposes, the following screen-captures feature the homepage of the Boston Globe site, without any address-bar-hiding logic in place. Note: these captures are just mockups – actual experience on these platforms may vary. On the left is iOS5’s Safari (running on iPhone), and on the right is Windows Phone 7 (pre-Mango). BlackBerry 7 (left), and Android 2.3 (right). WebOS (left), Opera Mini (middle), and Opera Mobile (right). Some browsers, such as the default browsers on WebOS and BlackBerry 5, hide the bar automatically without any developer intervention, but many of them don’t. Of these, we can only manually hide the address bar on iOS Safari and Android (according to Opera Web Opener, Mike Taylor, some discussion is underway for support in Opera Mini and Mobile as well, which would be great!). This is unfortunate, but iOS and Android are incredibly popular, so let’s direct our focus there. Great API, or greatest API? As it turns out, iOS and Android not only allow you to hide the address bar, they use the same JavaScript method to do so, too (this shouldn’t be surprising, given that they are both WebKit browsers, but nothing expected happens in mobile). However, the method they use is not exactly intuitive. You might set out looking for a JavaScript API dedicated to this purpose, like, say, window.toolbar.hide(), but alas, to hide the address bar you need to use the window.scrollTo method! window.scrollTo(0, 0); The scrollTo method is not new, it’s just this particular use of it that is. For the uninitiated, scrollTo is designed to scroll a document to a particular set of coordinates, assuming the document is large enough to scroll to that spot. The method accepts two arguments: a left coordinate and a top coordinate. It’s both simple and well supported pretty much everywhere.
In iOS and Android, these coordinates are calculated from the top of the browser’s viewport, just below the address bar (interestingly, it seems that some platforms like BlackBerry 6 treat the top of the browser chrome as 0 instead, meaning the page content is closer to 20px from the top). Anyway, by passing the coordinates 0, 0 to the scrollTo method, the browser will jump to the top of the page and pull the address bar out of view! Of course, if a quick call to scrollTo was all we need to do to hide the address bar in iOS and Android, this article would be pretty short, and nothing new. Unfortunately, the first issue we need to deal with is that this method alone will not usually do the trick: it must be called after the page has finished loading. The browser gives us a load event for just that purpose, so we’ll wrap our scrollTo method in it and continue on our merry way! We’ll use the standard addEventListener method to bind the load event, passing arguments for the event name, load, and a callback function to execute when the event is triggered. window.addEventListener("load",function() { window.scrollTo(0, 0); }); For the sake of preventing errors in those using browsers that don’t support addEventListener, such as Internet Explorer 8 and under, let’s make sure that method exists before we use it: if( window.addEventListener ){ window.addEventListener("load",function() { window.scrollTo(0, 0); }); } Now we’re getting somewhere, but we must also call the method after the load event’s default behavior has been applied. For this, we can use the setTimeout method, delaying its execution to after the load event has run its course. if( window.addEventListener ){ window.addEventListener("load",function() { setTimeout(function(){ window.scrollTo(0, 0); }, 0); }); } Sweet sugar of Christmas! Hit this demo in iOS and watch that address bar drift up and away! Not so fast… We’ve got a little problem: the approach above does work in iOS but, in some cases, it works a little too well. In the process of applying this behavior, we’ve broken one of the primary tenets of responsible web development: don’t break the browser’s default behaviour. This usability rule of thumb is often violated by developers with even the best of intentions, from breaking the browser’s back button through unrecorded Ajax page refreshes, to fancy momentum touch scrolling scripts that can wreak havoc in all but the most sophisticated of devices. In this case, we’ve prevented the browser’s native support of deep-linking to sections of a page (a hash identifier in the URL matching a page element’s id attribute, for example, http://example.com#contact) from working properly, because our script always scrolls to the top. To avoid this collision, we’ll need to detect whether a deep link, or hash, is present in the URL before applying our logic. We can do this by ensuring that the location.hash property is falsey: if( !window.location.hash && window.addEventListener ){ window.addEventListener( "load",function() { setTimeout(function(){ window.scrollTo(0, 0); }, 0); }); } Still works great! And a quick test using a hash-based URL confirms that our script will not execute when a deep anchor is in play. Now iOS is looking sharp, and we’ve added our feature defensively to avoid conflicts. Now, on to Android… Wait. You didn’t expect that we could write code for one browser and be finished, right? Of course you didn’t.
I mentioned earlier that Android uses the same method for getting rid of the address bar, but I left out the fact that the arguments it prefers vary slightly, but significantly, from iOS. Bah! Differing from the earlier iOS logic, to remove the address bar on Android’s default browser, you need to pass a Y coordinate of 1 instead of 0. Aside from being just plain odd, this is particularly unfortunate because to any other browser on the planet, 1px is a very real, however small, distance from the top of the page! window.scrollTo( 0, 1 ); Looks like we’re going to need a fork… R UA Android? At this point, some developers might decide to simply not support this feature in Android, and more determined devs might decide that a quick check of the User Agent string would be a reliable way to determine the browser and tweak the scroll value accordingly. Neither of those decisions would be tragic, but in the spirit of cross-browser and future-friendly development, I’ll propose an alternative. By this point, it should be clear that neither of the implementations above offers a particularly intuitive way to hide an address bar. As such, one might be skeptical that these approaches will stick around very long in their present state in either browser. Perhaps at some point, Android will decide to use 0 like iOS, making our lives a little easier, or maybe some new browser will decide to model their address bar hiding method after one of these implementations. In any case, detecting the User Agent only allows us to apply logic based on the known present, and in the world of mobile, let’s face it, the present is already the past. Writing a check In this next step of today’s technique, we’ll apply some logic to quickly determine the behavior model of the browser we’re using, then capitalize on that model – without caring which browser it happens to come from – by applying the appropriate scroll distance. To do this, we’ll rely on a fortunate side effect of Android’s implementation, which is when you programmatically scroll the page to 1 using scrollTo, Android will report that it’s still at 0 because oddly enough, it is! Of course, any other browser in this situation will report a scroll distance of 1. Thus, by scrolling the page to 1, then asking the browser its scroll distance, we can use this artifact of their wacky implementation to our advantage and scroll to the location that makes sense for the browser in play. Getting the scroll distance To pull off our test, we’ll need to ask the browser for its current scroll distance. The methods for getting scroll distance are not entirely standardized across popular browsers, so we’ll need to use some cross-browser logic. The following scroll distance function is similar to what you’d find in a library like jQuery. It checks the few common ways of getting scroll distance before eventually falling back to 0 for safety’s sake (that said, I’m unaware of any browsers that won’t return a numeric value from one of the first three properties). // scrollTop getter function getScrollTop(){ return window.pageYOffset || document.compatMode === "CSS1Compat" && document.documentElement.scrollTop || document.body.scrollTop || 0; } In order to execute that code above, the body object (referenced here as document.body) will need to be defined already, or we’ll risk an error. To determine that it’s defined, we can run a quick timer to execute code as soon as that object is defined and ready for use.
var bodycheck = setInterval(function(){ if( document.body ){ clearInterval( bodycheck ); //more logic can go here!! } }, 15 ); Above, we’ve defined a 15 millisecond interval called bodycheck that checks if document.body is defined and, if so, clears itself of running again. Within that if statement, we can extend our logic further to run other code, such as our check for the scroll distance, defined via the variable scrollTop below: var scrollTop, bodycheck = setInterval(function(){ if( document.body ){ clearInterval( bodycheck ); scrollTop = getScrollTop(); } }, 15 ); With this working, we can immediately scroll to 1, then check the scroll distance when the body is defined. If the distance reports 1, we’re likely in a non-Android browser, so we’ll scroll back to 0 and clean up our mess. window.scrollTo( 0, 1 ); var scrollTop, bodycheck = setInterval(function(){ if( document.body ){ clearInterval( bodycheck ); scrollTop = getScrollTop(); window.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); } }, 15 ); Cashing in All of the pieces are written now, so all we need to do is combine them with our previous logic for scrolling when the window is loaded, and we’ll have a cross-browser solution of which John Resig would be proud. Here’s our combined code snippet, with some formatting updates rolled in as well: (function( win ){ var doc = win.document; // If there’s a hash, or addEventListener is undefined, stop here if( !location.hash && win.addEventListener ){ //scroll to 1 window.scrollTo( 0, 1 ); var scrollTop = 1, getScrollTop = function(){ return win.pageYOffset || doc.compatMode === "CSS1Compat" && doc.documentElement.scrollTop || doc.body.scrollTop || 0; }, //reset to 0 on bodyready, if needed bodycheck = setInterval(function(){ if( doc.body ){ clearInterval( bodycheck ); scrollTop = getScrollTop(); win.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); } }, 15 ); win.addEventListener( "load", function(){ setTimeout(function(){ //reset to hide addr bar at onload win.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); }, 0); } ); } })( this ); View code example And with that, we’ve got a bunch more room to play with on both iOS and Android. Break out the eggnog …because we’re not done yet! In the spirit of making our script act more defensively, there’s still another use case to consider. It was essential that we used the window’s load event to trigger our scripting, but on pages with a lot of content, its use can come at a cost. Often, a user will begin interacting with a page, scrolling down as they read, before the load event has fired. In those situations, our script will jump the user back to the top of the page, resulting in a jarring experience. To prevent this problem from occurring, we’ll need to ensure that the page has not been scrolled beyond a certain amount. We can add a simple check using our getScrollTop function again, this time ensuring that its value is not greater than 20 pixels or so, accounting for a small tolerance. if( getScrollTop() < 20 ){ //reset to hide addr bar at onload window.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); } And with that, we’re pretty well protected! Here’s a final demo. The completed script can be found on Github (full source: https://gist.github.com/1183357 ). It’s MIT licensed. Feel free to use it anywhere or any way you’d like! Your thoughts? I hope this article provides you with a browser-agnostic approach to hiding the address bar that you can use in your own projects today.
Perhaps alternatively, the complications involved in this approach convinced you that doing this well is more trouble than it’s worth and, depending on the use case, that could be a fair decision. But at the very least, I hope this demonstrates that there’s a lot of work involved in pulling off this small task in only two major platforms, and that there’s a real need for standardization in this area. Feel free to leave a comment or criticism and I’ll do my best to answer in a timely fashion. Thanks, everyone! Some parting notes I scream, you scream… At the time of writing, I was not able to test this method on the latest Android 4.0 (Ice Cream Sandwich) build. According to Sencha Touch’s browser scorecard, the browser in 4.0 may have a different way of managing the address bar, so I’ll post in the comments once I get a chance to dig into it further. Short pages get no love Today’s technique only works when the page is as tall as, or taller than, the device’s available screen height, so that the address bar may be scrolled out of view. On a short page, you might work around this issue by applying a minimum height to the body element ( body { min-height: 460px; } ), but given the variety of screen sizes out there, not to mention changes in orientation, it’s tough to find a value that makes much sense (unless you manipulate it with JavaScript). 2011 Scott Jehl scottjehl 2011-12-20T00:00:00+00:00 https://24ways.org/2011/raising-the-bar-on-mobile/ design
276 Your jQuery: Now With 67% Less Suck Fun fact: more websites are now using jQuery than Flash. jQuery is an amazing tool that’s made JavaScript accessible to developers and designers of all levels of experience. However, as Spiderman taught us, “with great power comes great responsibility.” The unfortunate downside to jQuery is that while it makes it easy to write JavaScript, it makes it easy to write really really f*&#ing bad JavaScript. Scripts that slow down page load, unresponsive user interfaces, and spaghetti code knotted so deep that it should come with a bottle of whiskey for the next sucker developer that has to work on it. This becomes more important for those of us who have yet to move into the magical fairy wonderland where none of our clients or users view our pages in Internet Explorer. The IE JavaScript engine moves at the speed of an advancing glacier compared to more modern browsers, so optimizing our code for performance takes on an even higher level of urgency. Thankfully, there are a few very simple things anyone can add into their jQuery workflow that can clear up a lot of basic problems. When undertaking code reviews, three of the areas where I consistently see the biggest problems are: inefficient selectors; poor event delegation; and clunky DOM manipulation. We’ll tackle all three of these and hopefully you’ll walk away with some new jQuery batarangs to toss around in your next project. Selector optimization Selector speed: fast or slow? Saying that the power behind jQuery comes from its ability to select DOM elements and act on them is like saying that Photoshop is a really good tool for selecting pixels on screen and making them change color – it’s a bit of a gross oversimplification, but the fact remains that jQuery gives us a ton of ways to choose which element or elements in a page we want to work with. However, a surprising number of web developers are unaware that all selectors are not created equal; in fact, it’s incredible just how drastic the performance difference can be between two selectors that, at first glance, appear nearly identical. For instance, consider these two ways of selecting all paragraph tags inside a <div> with an ID. $("#id p"); $("#id").find("p"); Would it surprise you to learn that the second way can be more than twice as fast as the first? Knowing which selectors outperform others (and why) is a pretty key building block in making sure your code runs well and doesn’t frustrate your users waiting for things to happen. There are many different ways to select elements using jQuery, but the most common ways can be basically broken down into five different methods. In order, roughly, from fastest to slowest, these are: $("#id"); This is without a doubt the fastest selector jQuery provides because it maps directly to the native document.getElementById() JavaScript method. If possible, the selectors listed below should be prefaced with an ID selector in conjunction with jQuery’s .find() method to limit the scope of the page that has to be searched (as in the $("#id").find("p") example shown above). $("p");, $("input");, $("form"); and so on Selecting elements by tag name is also fast, since it maps directly to the native document.getElementsByTagName() method. $(".class"); Selecting by class name is a little trickier. While still performing very well in modern browsers, it can cause some pretty significant slowdowns in IE8 and below. Why?
IE9 was the first IE version to support the native document.getElementsByClassName() JavaScript method. Older browsers have to resort to using much slower DOM-scraping methods that can really impact performance. $("[attribute=value]"); There is no native JavaScript method for this selector to use, so the only way that jQuery can perform the search is by crawling the entire DOM looking for matches. Modern browsers that support the querySelectorAll() method will perform better in certain cases (Opera, especially, runs these searches much faster than any other browser) but, generally speaking, this type of selector is Slowey McSlowersons. $(":hidden"); Like attribute selectors, there is no native JavaScript method for this one to use. Pseudo-selectors can be painfully slow since the selector has to be run against every element in your search space. Again, modern browsers with querySelectorAll() will perform slightly better here, but try to avoid these if at all possible. If you must use one, try to limit the search space to a specific portion of the page: $("#list").find(":hidden"); But, hey, proof is in the performance testing, right? It just so happens that said proof is sitting right here. Be sure to notice the class selector numbers beside IE7 and 8 compared to other browsers and then wonder how the people on the IE team at Microsoft manage to sleep at night. Yikes. Chaining Almost all jQuery methods return a jQuery object. This means that when a method is run, its results are returned and you can continue executing more methods on them. Rather than writing out the same selector multiple times over, just making a selection once allows multiple actions to be run on it. Without chaining $("#object").addClass("active"); $("#object").css("color","#f0f"); $("#object").height(300); With chaining $("#object").addClass("active").css("color", "#f0f").height(300); This has the dual effect of making your code shorter and faster. Chained methods will be slightly faster than multiple methods made on a cached selector, and both ways will be much faster than multiple methods made on non-cached selectors. Wait… “cached selector”? What is this new devilry? Caching Another easy way to speed up your code that seems to be a mystery to developers is the idea of caching your selectors. Think of how many times you end up writing the same selector over and over again in any project. Every $(".element") selector has to search the entire DOM each time, regardless of whether or not that selector had been previously run. Running the selection once and then storing the results in a variable means that the DOM only has to be searched once. Once the results of a selector have been cached, you can do anything with them. First, run your search (here we’re selecting all of the <li> elements inside <ul id="blocks">): var blocks = $("#blocks").find("li"); Now, you can use the blocks variable wherever you want without having to search the DOM every time. $("#hideBlocks").click(function() { blocks.fadeOut(); }); $("#showBlocks").click(function() { blocks.fadeIn(); }); My advice? Any selector that gets run more than once should be cached. This jsperf test shows just how much faster a cached selector runs compared to a non-cached one (and even throws some chaining love in to boot). Event delegation Event listeners cost memory. 
In complex websites and apps it’s not uncommon to have a lot of event listeners floating around, and thankfully jQuery provides some really easy methods for handling event listeners efficiently through delegation. In a bit of an extreme example, imagine a situation where a 10×10 cell table needs to have an event listener on each cell; let’s say that clicking on a cell adds or removes a class that defines the cell’s background color. A typical way that this might be written (and something I’ve often seen during code reviews) is like so: $('table').find('td').click(function() { $(this).toggleClass('active'); }); jQuery 1.7 has provided us with a new event listener method, .on(). It acts as a utility that wraps all of jQuery’s previous event listeners into one convenient method, and the way you write it determines how it behaves. To rewrite the above .click() example using .on(), we’d simply do the following: $('table').find('td').on('click',function() { $(this).toggleClass('active'); }); Simple enough, right? Sure, but the problem here is that we’re still binding one hundred event listeners to our page, one to each individual table cell. A far better way to do things is to create one event listener on the table itself that listens for events inside it. Since the majority of events bubble up the DOM tree, we can bind a single event listener to one element (in this case, the <table>) and wait for events to bubble up from its children. The way to do this using the .on() method requires only one change from our code above: $('table').on('click','td',function() { $(this).toggleClass('active'); }); All we’ve done is moved the td selector to an argument inside the .on() method. Providing a selector to .on() switches it into delegation mode, and the event is only fired for descendants of the bound element (table) that match the selector (td). With that one simple change, we’ve gone from having to bind one hundred event listeners to just one. You might think that the browser having to do one hundred times less work would be a good thing and you’d be completely right. The difference between the two examples above is staggering. (Note that if your site is using a version of jQuery earlier than 1.7, you can accomplish the very same thing using the .delegate() method. The syntax of how you write the function differs slightly; if you’ve never used it before, it’s worth checking the API docs for that page to see how it works.) DOM manipulation jQuery makes it very easy to manipulate the DOM. It’s trivial to create new nodes, insert them, remove other ones, move things around, and so on. While the code to do this is simple to write, every time the DOM is manipulated, the browser has to repaint and reflow content which can be extremely costly. This is no more evident than in a long loop, whether it be a standard for() loop, while() loop, or jQuery $.each() loop. In this case, let’s say we’ve just received an array full of image URLs from a database or Ajax call or wherever, and we want to put all of those images in an unordered list. Commonly, you’ll see code like this to pull this off: var arr = [reallyLongArrayOfImageURLs]; $.each(arr, function(count, item) { var newImg = '<li><img src="'+item+'"></li>'; $('#imgList').append(newImg); }); There are a couple of problems with this. For one (which you should have already noticed if you’ve read the earlier part of this article), we’re making the $("#imgList") selection once for each iteration of our loop. 
The other problem here is that each time the loop iterates, it’s adding a new <li> to the DOM. Each of those insertions is going to be costly, and if our array is quite large then this could lead to a massive slowdown or even the dreaded ‘A script is causing this page to run slowly’ warning. var arr = [reallyLongArrayOfImageURLs], tmp = ''; $.each(arr, function(count, item) { tmp += '<li><img src="'+item+'"></li>'; }); $('#imgList').append(tmp); All we’ve done here is create a tmp variable that each <li> is added to as it’s created. Once our loop has finished iterating, that tmp variable will contain all of our list items in memory, and can be appended to our <ul> all in one go. Browsers work much faster when working with objects in memory rather than on screen, so this is a much faster, more CPU-cycle-friendly method of building a list. Wrapping up These are far from being the only ways to make your jQuery code run better, but they are among the simplest ones to implement. Though each individual change may only make a few milliseconds of difference, it doesn’t take long for those milliseconds to add up. Studies have shown that the human eye can discern delays of as few as 100ms, so simply making a few changes sprinkled throughout your code can very easily have a noticeable effect on how well your website or app performs. Do you have other jQuery optimization tips to share? Leave them in the comments and help make us all better. Now go forth and make awesome! 2011 Scott Kosman scottkosman 2011-12-13T00:00:00+00:00 https://24ways.org/2011/your-jquery-now-with-less-suck/ code
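To tie the three fixes together, here’s a hedged sketch (the #imgList markup and the imageURLs array are hypothetical placeholders, not from the article) combining a cached selector, a single delegated listener and one batched DOM insertion:

var imageURLs = ['cat1.jpg', 'cat2.jpg', 'cat3.jpg'], // placeholder data
    list = $('#imgList'), // cached: the DOM is only searched once
    html = '';
// one delegated listener on the list instead of one per <li>
list.on('click', 'li', function() {
    $(this).toggleClass('active');
});
// build the markup in memory, then touch the DOM a single time
$.each(imageURLs, function(count, item) {
    html += '<li><img src="' + item + '"></li>';
});
list.append(html);

Each tip is small on its own, but stacked like this they remove the redundant DOM searches, the per-item event bindings and all but one reflow-triggering insertion.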
275 Context First: Web Strategy in Four Handy Ws Many, many years ago, before web design became my proper job, I trained and worked as a journalist. I studied publishing in London and spent three fun years learning how to take a few little nuggets of information and turn them into a story. I learned a bunch of stuff that has all been a huge help to my design career. Flatplanning, layout, typographic theory. All of these disciplines have since translated really well to web design, but without doubt the most useful thing I learned was how to ask difficult questions. Pretty much from day one of journalism school they hammer into you the importance of the Five Ws. Five disarmingly simple lines of enquiry that eloquently manage to provide the meat of any decent story. And with alliteration thrown in too. For a young journo, it’s almost too good to be true. Who? What? Where? When? Why? It seems so obvious as to almost be trite but, fundamentally, any story that manages to answer those questions for the reader is doing a pretty good job. You’ll probably have noticed feeling underwhelmed by certain news pieces in the past – disappointed, like something was missing. Some irritating oversight that really lets the story down. No doubt it was one of the Ws – those innocuous little suckers are generally only noticeable by their absence, but they sure get missed when they’re not there. Question everything I’ve always been curious. An inveterate tinkerer with things and asker of dopey questions, often to the point of abject annoyance for anyone unfortunate enough to have ended up in my line of fire. So, naturally, the Five Ws started drifting into other areas of my life. I’d scrutinize everything, trying to justify or explain my rationale using these Ws, but I’d also find myself ripping apart the stuff that clearly couldn’t justify itself against the same criteria. So when I started working as a designer I applied the same logic and, sure enough, the Ws pretty much mapped to the exact same needs we had for gathering requirements at the start of a project. It seemed so obvious, such a simple way to establish the purpose of a product. What was it for? Why were we making it? And, of course, who were we making it for? It forced clients to stop and think, when really what they wanted was to get going and see something shiny. Sometimes that was a tricky conversation to have, but it’s no coincidence that those who got it also understood the value of strategy and went on to have good solid products, while those that didn’t often ended up with arrogantly insular and very shiny but ultimately unsatisfying and expendable products. Empty vessels make the most noise and all that… Content first I was both surprised and pleased when the whole content first idea started to rear its head a couple of years back. Pleased, because without doubt it’s absolutely the right way to work. And surprised, because personally it’s always been the way I’ve done it – I wasn’t aware there was even an alternative way. Content in some form or another is the whole reason we were making the things we were making. I can’t even imagine how you’d start figuring out what a site needs to do, how it should be structured, or how it should look without a really good idea of what that content might be. It baffles me still that this was somehow news to a lot of people. What on earth were they doing? Design without purpose is just folly, surely?
It’s great to see the idea gaining momentum but, having watched it unfold, it occurred to me recently that although it’s fantastic to see a tangible shift in thinking – away from those bleak times, where making things up was somehow deemed an appropriate way to do things – there’s now a new bad guy in town. With any buzzword solution of the moment, there’s always a catch, and it seems like some have taken the content first approach a little too literally. By which I mean, it’s literally the first thing they do. The project starts, there’s a very cursory nod towards gathering requirements, and off they go, cranking content. Writing copy, making video, commissioning illustrations. All that’s happened is that the ‘making stuff up’ part has shifted along the line, away from layout and UI, back to the content. Starting is too easy I can’t remember where I first heard that phrase, but it’s a great sentiment which applies to so much of what we do on the web. The medium is so accessible and to an extent disposable; throwing things together quickly carries far less burden than in any other industry. We’re used to tweaking as we go, changing bits, iterating things into shape. The ubiquitous beta tag has become the ultimate caveat, and has made the unfinished and unpolished acceptable. Of course, that can work brilliantly in some circumstances. Occasionally, a product offers such a paradigm shift it’s beyond the level of deep planning and prelaunch finessing we’d ideally like. But, in the main, for most client sites we work on, there really is no excuse not to do things properly. To ask the tricky questions, to challenge preconceptions and really understand the Ws behind the products we’re making before we even start. The four Ws For product definition, only four of the five Ws really apply, although there’s a lot of discussion around the idea of when being an influencing factor. For example, the context of a user’s engagement with your product is something you can make a call on depending on the specifics of the project. So, here’s my take on the four essential Ws. I’ll point out here that, of course, these are not intended to be autocratic dictums. Your needs may differ, your clients’ needs may differ, but these four starting points will get you pretty close to where you need to be. Who It’s surprising just how many projects start without a real understanding of the intended audience. Many clients think they have an idea, but without really knowing – it’s presumptive at best, and we all know what presumption is the mother of, right? Of course, we can’t know our audiences in the same way a small shop owner might know their customers. But we can at least strive to find out what type of people are likely to be using the product. I’m not talking about deep user research. That should come later. These are the absolute basics. What’s the context for their visit? How informed are they? What’s their level of comprehension? Are they able to self-identify and relate to categories you have created? I could go on, and it changes on a per-project basis. You’ll only find this out by speaking to them, if not in person, then indirectly through surveys, questionnaires or polls. The mechanism is less important than actually reaching out and engaging with them, because without that understanding it’s impossible to start to design with any empathy. What Once you become deeply involved directly with a product or service, it’s notoriously difficult to see things as an outsider would. 
You learn the thing inside and out, you develop shortcuts and internal phraseology. Colloquialisms creep in. You become too close. So it’s no surprise when clients sometimes struggle to explain what it is their product actually does in a way that others can understand. Often products are complex but, really, the core reasons behind someone wanting to use that product are very simple. There’s a value proposition for the customer and, if they choose to engage with it, there’s a value exchange. If that proposition or exchange isn’t transparent, then people become confused and will likely go elsewhere. Make sure both your client and you really understand what that proposition is and, in turn, what the expected exchange should be. In a nutshell: what is the intended outcome of that engagement? Often the best way to do this is strip everything back to nothing. Verbosity is rife on the web. Just because it’s easy to create content, that shouldn’t be a reason to do so. Figure out what the value proposition is and then reintroduce content elements that genuinely help explain or present that to a level that is appropriate for the audience. Why In advertising, they talk about the truths behind a product or service. Truths can be tangible or abstract, but the most important part is the resonance those truths hit with a customer. In a digital product or service those truths are often exposed as benefits. Why is this what I need? Why will it work for me? Why should I trust you? The why is one of the more fluffy Ws, yet it’s such an important one to nail. Clients can get prickly when you ask them to justify the why behind their product, but it’s a fantastic way to make sure the value proposition is clear, realistic and meets with the expectations of both client and customer. It’s our job as designers to question things: we’re not just a pair of hands for clients. Just recently I spoke to a potential client about a site for his business. I asked him why people would use his product and also why his product seemed so fractured in its direction. He couldn’t answer that question so, instead of ploughing on regardless, he went back to his directors and is now re-evaluating that business. It was awkward but he thanked me and hopefully he’ll have a better product as a result. Where In this instance, where is not so much a geographical thing, although in some cases that level of context may indeed become an influencing factor… The where we’re talking about here is the position of the product in relation to others around it. By looking at competitors or similar services around the one you are designing, you can start to get a sense for many of the things that are otherwise hard to pin down or have yet to be defined. For example, in a collection of sites all selling cars, where does yours fit most closely? Where are the overlaps? How are they communicating to their customers? How is the product range presented or categorized? It’s good to look around and see how others are doing it. Not in a quest for homogeneity but more to reference or to avoid certain patterns that may or may not make sense for your own particular product. Clients often strive to be different for the sake of it. They feel they need to provide distinction by going against the flow a bit. We know different. We know users love convention. They embrace familiar mental models. They’re comfortable with things that they’ve experienced elsewhere.
By showing your client that position is a vital part of their strategy, you can help shape their product into something great. To conclude So there we have it – the four Ws. Each one tells a different and vital part of the story you need in order to make a really good product. It might sound like a lot of work, particularly when the client is breathing down your neck expecting to see things, but without those pieces in place, the story you’re building your product on, and the content you’re creating to form that product, can only ever fit into one genre. Fiction. 2011 Alex Morris alexmorris 2011-12-10T00:00:00+00:00 https://24ways.org/2011/context-first/ content
274 Adaptive Images for Responsive Designs So you’ve been building some responsive designs and you’ve been working through your checklist of things to do: You started with the content and designed around it, with mobile in mind first. You’ve gone liquid and there’s nary a px value in sight; % is your weapon of choice now. You’ve baked in a few media queries to adapt your layout and tweak your design at different window widths. You’ve made your images scale to the container width using the fluid image technique. You’ve even done the same for your videos using a nifty bit of JavaScript. You’ve done a good job, so pat yourself on the back. But there’s still a problem and it’s as tricky as it is important: image resolutions. HTML has an <img> problem CSS is great at adapting a website design to different window sizes – it allows you not only to tweak layout but also to send rescaled versions of the design’s images. And you want to do that because, after all, a smartphone does not need a 1,900-pixel background image1. HTML is less great. In the same way that you don’t want CSS background images to be larger than required, you don’t want that happening with <img>s either. A smartphone only needs a small image but desktop users need a large one. Unfortunately <img>s can’t adapt like CSS, so what do we do? Well, you could just use a high resolution image and the fluid image technique would scale it down to fit the viewport; but that’s sending an image five or six times the file size that’s really needed, which makes it slow to download and unpleasant to use. Smartphones are pretty impressive devices – my ancient iPhone 3G is more powerful in every way than my first proper computer – but they’re still terribly slow in comparison to today’s desktop machines. Sending a massive image means it has to be manipulated in memory and redrawn as you scroll. You’ll find phones rapidly run out of RAM and slow to a crawl. Well, OK. You went mobile first with everything else so why not put in mobile resolution <img>s too? Because even though mobile devices are rapidly gaining share in your analytics stats, they’re still not likely to be the majority of your user base. I don’t think desktop users would be happy with pokey little mobile resolution images, do you? What we need are adaptive images. Adaptive image techniques There are a number of possible solutions, each with pros and cons, and it’s not as simple to find a graceful solution as you might expect. Your first thought might be to use JavaScript to trawl through the markup and rewrite the source attribute. That’ll get you the right end result, but it’ll have done it in a way you absolutely don’t want. That’s because of the way browsers load resources. The browser starts to load the HTML and builds the page on-the-fly; as soon as it finds an <img> element, it immediately asks the server for that image. After the HTML has finished loading, the JavaScript will run, change the src attribute, and then the browser will request that new image too. Not instead of, but as well as. Not good: that’s added more bloat instead of cutting it. Plain JavaScript is out then, which is a problem, because what other tools do we have to work with as web designers? Let’s ignore that for now and I’ll outline another issue with the concept of serving different resolution images for different window widths: a basic file management problem. To request a different image, that image has to exist on the server. How’s it going to get there? 
That’s not a trivial problem, especially if you have non-technical users that update content through a CMS. Let’s say you solve that – do you plan on a simple binary switch: big image|little image? Is that really efficient or future-proof? What happens if you have an archive of existing content that needs to behave this way? Can you apply such a solution to existing content or markup? There’s a detailed round-up of potential techniques for solving the adaptive images problem over at the Cloud Four blog if you fancy a dig around exploring all the options currently available. But I’m here to show you what I think is the most flexible and easy to implement solution, so here we are. Adaptive Images Adaptive Images aims to mitigate most of the problems of bringing the venerable <img> tag into the 21st century. And it works by leaving that tag completely alone – just add that desktop resolution image into the markup as you’ve been doing for years now. We’ll fix it using secret magic techniques and bottled pixie dreams. Well, fine: with one .htaccess file, one small PHP file and one line of JavaScript. But you’re killing the mystique with that kind of talk. So, what does this solution do? It allows <img>s to adapt to the same breakpoints you use in your media queries, giving granular control in the same way you get with your CSS. It installs on your server in five minutes or less; after that it’s automatic and you don’t need to do anything. It generates its own rescaled images on the server and doesn’t require markup changes, so you can apply it to existing web content. If you wish, it will make all of your images go mobile-first (just in case that’s what you want if JavaScript and cookies aren’t available). Sound good? I hope so. Here’s what you do. Setting up and rolling out I’ll assume you have some basic server knowledge along with that wealth of front-end wisdom exploding out of your head: that you know not to overwrite any existing .htaccess file for example, and how to set file permissions on your server. Feeling up to it? Excellent. Download the latest version of Adaptive Images either from the website or from the GitHub repository. Upload the included .htaccess and adaptive-images.php files into the root folder of your website. Create a directory called ai-cache and make sure the server can write to it (CHMOD 755 should do it). Add the following line of JavaScript into the <head> of your site: <script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script> That’s it, unless you want to tweak the default settings. You likely do, but essentially you’re already up and running. How it works Adaptive Images does a number of things depending on the scenario the script has to handle, but here’s a basic overview of what it does when you load a page running it: A session cookie is written with the value of the visitor’s screen size in pixels. The browser encounters an <img> tag in the HTML and sends a request to the server for that image. It also sends the cookie, because that’s how browsers work. Apache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL. There are! The .htaccess says “Hey, server! 
Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.” The PHP file then does some intelligent thinking which can cover a number of scenarios, but I’ll illustrate one path that can happen: The PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px. The PHP has a look at the available media query sizes that were configured and decides which one matches the user’s device. It then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there. We’ll pretend it doesn’t – the PHP then goes to the actual requested URI and finds that the original file does exist. It has a look to see how wide that image is. If it’s already smaller than the user’s screen width it sends it along and stops there. But, let’s pretend the image is 1,000px wide. The PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it. It also does a few other things when needs arise, for example: It sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it. It handles cases where there isn’t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size. It compares timestamps between the source image and the generated cache image – to ensure that if the source image gets updated, the old cached file won’t be sent. Customizing There are a few options you can customize if you don’t like the default values. By looking in the PHP’s configuration section at the top of the file, you can: Set the resolution breakpoints to match your media query break points. Change the name and location of the ai-cache folder. Change the quality level any generated JPG images are saved at. Have it perform a subtle sharpen on rescaled images to help keep detail. Toggle whether you want it to compare the files in the cache folder with the source ones or not. Set how long the browser should cache the images for. Switch between a mobile-first or desktop-first approach when a cookie isn’t found. More importantly, you probably want to omit a few folders from the AI behaviour. You don’t need or want it resizing the images you’re using in your CSS, for example. That’s fine – just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. Or, if you’re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it’ll only apply AI behaviour to a given list of folders. Caveats As I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I’ve seen so far. This is a PHP solution I wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don’t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence2 and I would welcome anyone to contribute a port of the code3. Content delivery networks Adaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won’t allow that to happen. Adaptive Images will not work if you’re using a CDN to deliver your website. 
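For the curious, the heart of that PHP does something along these lines. What follows is a simplified sketch using PHP’s GD image functions; the variable names, breakpoint values and request handling are mine for illustration, not the actual adaptive-images.php source:

<?php
// Illustrative sketch only: $_GET['img'] stands in for the URI that the
// .htaccess rewrite would really hand over.
$breakpoints = array(480, 768, 992);             // configured breakpoints
$resolution  = isset($_COOKIE['resolution'])
             ? (int) $_COOKIE['resolution']
             : max($breakpoints);                // no cookie: assume desktop

// Pick the smallest configured breakpoint that covers this device.
$width = max($breakpoints);
foreach ($breakpoints as $bp) {
    if ($resolution <= $bp) { $width = $bp; break; }
}

$source = $_SERVER['DOCUMENT_ROOT'] . '/' . $_GET['img'];
$cached = $_SERVER['DOCUMENT_ROOT'] . "/ai-cache/$width/" . basename($source);

if (!file_exists($cached)) {
    $image = imagecreatefromjpeg($source);
    $ow    = imagesx($image);
    if ($ow <= $width) {
        $cached = $source;                       // already small enough: send as-is
    } else {
        $oh      = imagesy($image);
        $nh      = (int) floor($oh * ($width / $ow));
        $resized = imagecreatetruecolor($width, $nh);
        imagecopyresampled($resized, $image, 0, 0, 0, 0, $width, $nh, $ow, $oh);
        if (!is_dir(dirname($cached))) {
            mkdir(dirname($cached), 0755, true); // creates /ai-cache/480/ etc.
        }
        imagejpeg($resized, $cached, 80);        // save for next time
    }
}
header('Content-Type: image/jpeg');
readfile($cached);

The real script does a good deal more (cookie parsing, cache headers, sharpening, timestamp comparison), but that is the basic decision path from the walkthrough above.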
A minor but interesting cookie issue. As Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested – even though the JavaScript that sets the cookie is loaded by the browser before it finds any <img> tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem: The $mobile_first toggle allows you to choose what to send to a browser if a cookie isn’t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest. Even if set to TRUE, Adaptive Images checks the User Agent String. If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE. This means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn’t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest. The best way to get a cookie written is to use JavaScript as I’ve explained above, because it’s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP ‘image’ to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won’t be set until after images have been requested. The future For today, this is a pretty good solution. It works, and as it doesn’t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you’re good to go – you’d never know AI had been there. However, this isn’t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future. First, we could do with browsers sending far more information about the user’s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn’t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for. Secondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does. I hope you’ve found this interesting and will find Adaptive Images useful. Footnotes 1 While I’m talking about preventing smartphones from downloading resources they don’t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files. 2 Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository. 3 There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images. 
2011 Matt Wilcox mattwilcox 2011-12-04T00:00:00+00:00 https://24ways.org/2011/adaptive-images-for-responsive-designs/ ux
273 There’s No Formula for Great Designs Before he combined them with fluid images and CSS3 media queries to coin responsive design, Ethan Marcotte described fluid grids — one of the most enjoyable parts of responsive design. Enjoyable that is, if you like working with math(s). But fluid grids aren’t perfect and, unless we’re careful when applying them, they can sometimes result in a design that feels disconnected. Recapping fluid grids If you haven’t read Ethan’s Fluid Grids, now would be a good time to do that. It centres around a simple formula for converting pixel widths to percentages: (target ÷ context) × 100 = result How does that work in practice? Well, take that Fireworks or Photoshop comp you’re working on (I call them static design visuals, or just visuals.) Of course, everything on that visual — column divisions, inline images, navigation elements, everything — is measured in pixels. Now: Pick something in the visual and measure its width. That’s our target. Take that target measurement and divide it by the width of its parent (context). Multiply what you’ve got by 100 (shift two decimal places). What you’re left with is a percentage width to drop into your style sheets. For example, divide this 300px wide sidebar division by its 948px parent and then multiply by 100: your original 300px is neatly converted to 31.646%. .content-sub { width : 31.646%; /* 300px ÷ 948px = .31646 */ } That formula makes it surprisingly simple for even die-hard fixed width aficionados to convert their visuals to percentage-based, fluid layouts. It’s a handy formula for those who still design using static visuals, and downright essential for those situations where one person in an organization designs in Fireworks or Photoshop and another develops with CSS. Why? Well, although I think that designing in a browser makes the best sense — particularly when designing for multiple devices — I’ll wager most designers still make visuals in Fireworks or Photoshop and use them for demonstrations and get feedback and sign-off. That’s OK. If you haven’t made the transition to content-out designing in a browser yet, the fluid grids formula helps you carry on pushing pixels a while longer. You can carry on moving pixel width measurements from your visuals to your style sheets, too, in the same way you always have. You can be precise to the pixel and even apply a grid image as a CSS background to help you keep everything lined up perfectly. Once you’re done, and the fixed width layout in the browser matches your visual, loop back through your style sheets and convert those pixels to percentages using the fluid grids formula. With very little extra work, you’ll have a fluid implementation of your fixed width layout. The fluid grids formula is simple and incredibly effective, but not long after I started working responsively I realized that the formula shouldn’t (always) be a one-fix, set-and-forget calculation. I noticed that unless we compensate for problems it sometimes creates, the result can be a disconnected design. Staying connected Good design relies on connectedness, a feeling of natural balance between elements and the grid they’re placed on. Give an element greater prominence or position in a visual hierarchy and you can fundamentally alter the balance and sometimes the meaning of a design. Different from a browser’s page zooming feature — where images, text and overall layout change size by the same ratio — fluid grids flex a layout in response to a window or device width. 
Columns expand and contract, and within them fluid media (images and videos) can also change size. This can be one of the most impressive demonstrations of responsive design. But not every element within a fluid grid can change size along with the window or device width. For example, type size and leading won’t change along with a column’s width. When columns and elements within them change width, all too easily a visual hierarchy can be broken and along with it the relationship between element sizes and the outer window or viewport. This can happen quickly if you make just one set of fluid grid calculations and use those percentages across every screen width, from smartphones through tablets and up to large desktops. The answer? Make several sets of fluid grid calculations, each one at a significant window or device width breakpoint. Then apply those new percentages, when needed, to help keep elements in proportion and maintain balance and connectedness. Here’s how I work. Avoiding disconnection I’ve never been entirely happy with grid frameworks such as the 960 Grid System, so I start almost every project by creating a custom grid to inform my layout decisions. Here’s a plain version of a grid from a recent project that I’ll use as an illustration. This project’s grid comprises 84px columns and 24px gutters. This creates an odd number of columns at common tablet and desktop widths, and allows for 300px fixed width assets – useful when I need to fit advertising into a desktop layout’s sidebar. Showing common advertising sizes For this project I chose three 320 and Up breakpoints above 320px and, after placing as many columns as would fit those breakpoint widths, I derived three content widths: at the 768px breakpoint, 7 columns give a 732px content width; at 992px, 9 columns give 948px; and at 1,382px, 13 columns give 1,380px. Here’s my grid again, this time with pixel measurements and breakpoints overlaid. Showing pixel measurements and breakpoints Now cast your mind back to the fluid grid calculation I made earlier. I divided a 300px element by 948px and arrived at 31.646%. For some elements it’s possible to use that percentage across all screen widths, but others will feel too small in relation to a narrower 768px and too large inside 1,380px. To help maintain connectedness, I make a set of fluid grid calculations based on each of the content widths I established earlier. Now I can shift an element’s percentage width up or down when I switch to a new breakpoint and content width. For example: 300px is 40.984% of 732px; 31.646% of 948px; and 21.739% of 1,380px. I’ll add all those fluid grid percentages to my grid image and save it for quick reference. Showing percentages at all breakpoints Then I can apply those different percentage widths to elements at each breakpoint using CSS3 media queries. For example, that sidebar division again: /* 732px, 7-column width */ @media only screen and (min-width: 768px) { .content-sub { width : 40.984%; /* 300px ÷ 732px = .40984 */ } } /* 948px, 9-column width */ @media only screen and (min-width: 992px) { .content-sub { width : 31.646%; /* 300px ÷ 948px = .31646 */ } } /* 1380px, 13-column width */ @media only screen and (min-width: 1382px) { .content-sub { width : 21.739%; /* 300px ÷ 1380px = .21739 */ } } The number of changes you make to a layout at different breakpoints will, of course, depend on the specifics of the design you’re working on. 
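As an aside, if a preprocessor such as Sass is already part of your toolchain, it can do this arithmetic for you. Here’s a minimal sketch in SCSS; the grid-width function is my own naming, not something built into Sass or any framework:

// (target ÷ context) × 100, expressed as a percentage.
@function grid-width($target, $context) {
  @return ($target / $context) * 100%;
}

/* 948px, 9-column width */
.content-sub {
  width: grid-width(300px, 948px); /* compiles to roughly 31.646% */
}

Call it inside each media query with that breakpoint’s content width as the context, and the percentages stay tied to the right content width with no calculator required.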
Yes, this is additional work, but the result will be a layout that feels better balanced and within which elements remain in harmony with each other while they respond to new screen or device widths. Putting the design in responsive web design Until now, many of the conversations around responsive web design have been about aspects of technical implementation, rather than design. I believe we’re only beginning to understand what’s involved in designing responsively. In future, we’ll likely be making design decisions not just about proportions but also about responsive typography. We’ll also need to learn how to adapt our designs to device characteristics such as touch targets and more. Sometimes we’ll make decisions to improve function, other times because they make a design ‘feel’ right. You’ll know when you’ve made a right decision. You’ll feel it. After all, there really is no formula for making great designs. 2011 Andy Clarke andyclarke 2011-12-23T00:00:00+00:00 https://24ways.org/2011/theres-no-formula-for-great-designs/ ux
272 Crafting the Front-end Much has been spoken and written recently about the virtues of craftsmanship in the context of web design and development. It seems that we as fabricators of the web are finally tiring of seeking out parallels between ourselves and architects, and are turning instead to the fabled specialist artisans. Identifying oneself as a craftsman or craftswoman (let’s just say craftsperson from here onward) will likely be a trend of early 2012. In this pre-emptive strike, I’d like to expound on this movement as I feel it pertains to front-end development, and encourage care and understanding of the true qualities of craftsmanship (craftspersonship). The core values I’ll begin by defining craftspersonship. What distinguishes a craftsperson from a technician? Dictionaries tend to define a craftsperson as one who possesses great skill in a chosen field. The badge of a craftsperson for me, though, is a very special label that should be revered and used sparingly, only where it is truly deserved. A genuine craftsperson encompasses a few other key traits, far beyond raw skill, each of which must be learned and mastered. A craftsperson has: An appreciation of good work, in both the work of others and their own. And not just good as in ‘hey, that’s pretty neat’, I mean a goodness like a shining purity – the kind of good that feels right when you see it. A belief in quality at every level: every facet of the craftsperson’s product is as crucial as any other, without exception, even those normally hidden from view. Vision: an ability to visualize their path ahead, pre-empting the obstacles that may be encountered to plan a route around them. A preference for simplicity: an almost Bauhausesque devotion to undecorated functionality, with no unjustifiable parts included. Sincerity: producing work that speaks directly to its purpose with flawless clarity. Only when you become a custodian of such values in your work can you consider calling yourself a craftsperson. Now let’s take a look at some steps we front-end developers can take on our journey of enlightenment toward craftspersonhood. Speaking of the craftsman’s journey, be sure to watch out for the video of The Standardistas’ stellar talk at the Build 2011 conference titled The Journey, which should be online sometime soon. Building your own toolbox My grandfather was a carpenter and trained as a young apprentice under a master. After observing and practising the many foundation theories, principles and techniques of carpentry, he was tasked with creating his own set of woodworking tools, which he would use and maintain throughout his career. By going through the process of having to create his own tools, he would be connected at the most direct level with every piece of wood he touched, his tools being his own creations and extensions of his own skilled hands. The depth of his knowledge of these tools must have surpassed the intricate as he fathered, used, cleaned and repaired them, day in and day out over many years. And so it should be, ideally, with all crafts. We must understand our tools right down to the most fundamental level. I firmly believe that a level of true craftsmanship cannot be reached while there exists a layer that remains not wholly understood between a creator and his canvas. 
Of course, our tools as front-end developers are somewhat more complex than those of other crafts – it may seem reasonable to require that a carpenter create his or her own set of chisels, but somewhat less so to ask a front-end developer to code their own CSS preprocessor, or design their own computer. However, it is still vitally important that you understand how your tools work. This is particularly critical when it comes to things like preprocessors, libraries and frameworks which aim to save you time by automating common processes and functions. For the most part, anything that saves you time is a Good Thing™ but it cannot be stressed enough that using tools like these in earnest should be avoided until you understand exactly what they are doing for you (and, to an extent, how they are doing it). In particular, you must understand any drawbacks to using your tools, and any shortcuts they may be taking on your behalf. I’m not suggesting that you steer clear of paid work until you’ve studied each of jQuery’s 9,266 lines of JavaScript source code but, all levity aside, it will further you on your journey to look at interesting or relevant bits of jQuery, and any other libraries you might want to use. Such libraries often directly link to corresponding sections of their source code on sites like GitHub from their official documentation. Better yet, they’re almost always written in high level languages (easy to read), so there’s no excuse not to don your pith helmet and go on something of an exploration. Any kind of tangential learning like this will drive you further toward becoming a true craftsperson, so keep an open mind and always be ready to step out of your comfort zone. Downtime and tool honing With any craft, it is essential to keep your tools in good condition, and a good idea to stay up-to-date with the latest equipment. This is especially true on the web, which, as we like to tell anyone who is still awake more than a minute after asking what it is that we do, advances at a phenomenal pace. A tool or technique that could be considered best practice this week might be the subject of haughty derision in a comment thread within six months. I have little doubt that you already spend a chunk of time each day keeping up with the latest material from our industry’s finest Interblogs and Twittertubes, but do you honestly put aside time to collect bookmarks and code snippets from things you read into a slowly evolving toolbox? At @media in 2009, Simon Collison delivered a candid talk on his ‘Ultimate Package’. Those of us who didn’t flee the room anticipating a newfound and unwelcome intimacy with the contents of his trousers were shown how he maintained his own toolkit – a collection of files and folders all set up and ready to go for a new project. By maintaining a toolkit in this way, he has consistency across projects and a dependable base upon which to learn and improve. The assembly and maintenance of such a personalized and familiar toolkit is probably as close as we will get to emulating the tool making stage of more traditional craft trades. Keep a master copy of your toolkit somewhere safe, making copies of it for new projects. When you learn of a way in which part of it can be improved, make changes to the master copy. Simplicity through modularity I believe that the user interfaces of all web applications should be thought of as being made up primarily of modular components. Modules in this context are patterns in design that appear repeatedly throughout the app. 
These can be small collections of elements, like a user profile summary box (profile picture, username, meta data), as well as atomic elements such as headings and list items. Well-crafted front-end architectures have the ability to support this kind of repeating pattern as modules, with as close to no repetition of CSS (or JavaScript) as possible, and as close to no variations in HTML between instances as possible. One of the most fundamental and well known tenets of software engineering is the DRY rule – don’t repeat yourself. It requires that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system.” As craftspeople, we must hold this rule dear and apply it to the modules we have identified in our site designs. The moment you commit a second style definition for a module, the quality of your output (the front-end code) takes a huge hit. There should only ever be one base style definition for each distinct module or component. Keep these in a separate, sacred place in your CSS. I use a _modules.scss Sass include file, imported near the top of my main CSS files. Be sure, of course, to avoid making changes to this file lightly, as the smallest adjustment can affect multiple pages (hint: keep a structure list of which modules are used on which pages). Avoid the inevitable temptation to duplicate code late in the project. Sticking to this rule becomes more important the more complex the codebase becomes. If you can stick to this rule, using sensible class names and consistent HTML, you can reach a joyous, self-fulfilling plateau stage in each project where you are assembling each interface from your own set of carefully crafted building blocks. Old school markup Let’s take a step back. Before we fret about creating a divinely pure modular CSS framework, we need to know the site’s design and what it is made of. The best way to gain this knowledge is to go old school. Print out every comp, mockup, wireframe, sketch or whatever you have. If there are sections of pages that are hidden until some user action takes place, or if the page has multiple states, be sure that you have everything that could become visible to the user on paper. Once you have your wedge of paper designs, lay out all the pages on the floor, or stick them to the wall if you can, arranging them logically according to the site hierarchy, by user journey, or whatever guidelines make most sense to you. Once you have the site laid out before you, study it for a while, familiarizing yourself with every part of every interface. This will eliminate nasty surprises late in the project when you realize you’ve duplicated something, or left an interface on the drawing board altogether. Now that you know the site like it’s your best friend, get out your pens or pencils of choice and attack it. Mark it up like there’s no tomorrow. Pretend you’re a spy trying to identify communications from an enemy network hiding their messages in newspapers. Look for patterns and similarities, drawing circles around them. These are your modules. Start also highlighting the differences between each instance of these modules, working out which is the most basic or common type that will become the base definition from which all other representations are extended. This simple but empowering exercise will equip you for your task of actually crafting, instead of just building, the front-end. 
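Once those circled patterns become modules, each one gets a single base definition. Purely as a sketch of the shape this might take in that _modules.scss file (the class names here are hypothetical, not a recommendation):

// _modules.scss: one authoritative base definition per module.
.profile-summary {
  padding: 12px;
  border: 1px solid #ddd;

  .avatar   { float: left; margin-right: 12px; }
  .username { font-weight: bold; }
  .meta     { color: #777; font-size: 0.85em; }
}

// Variations extend the base; they never redefine it.
.profile-summary-compact {
  .meta { display: none; }
}

Every instance on every page then leans on that one definition, which is the DRY rule made visible in your style sheets.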
Without the knowledge gained from this kind of research phase, you will be blundering forward, improvising as best you can, but ultimately making quality-compromising mistakes that could have been avoided. For more on this theme, read Anna Debenham’s Front-end Style Guides which recommends a similar process, and the sublime idea of extending this into a guide to refer to during development and beyond. Design homogeneity Moving forward again, you now have your modules defined and things are looking good. I mentioned that many instances of these modules will carry minor differences. These differences must be given significant thinking time, and discussion time with your designer(s). It should be common knowledge by now that successful software projects are not the product of distinct design and build phases with little or no bidirectional feedback. The crucial nature of the designer-developer relationship has been covered in depth this year by Paul Robert Lloyd, and a joint effort from both teams throughout the project lifecycle is pivotal to your ability to craft and ship successful products. This relationship comes into play when you’re well into the development of the site, and you start noticing these differences between instances of modules (they’ll start to stand out very clearly to you and your carefully regimented modular CSS system). Before you start overriding your base styles, question the differences with the designer to work out why they exist. Perhaps they are required and are important to their context, but perhaps they were oversights from earlier design revisions, or simple mistakes. The craftsperson’s gland As you grow towards the levels of expertise and experience where you can proudly and honestly consider yourself a craftsperson, you will find that you steadily develop what initially feels like a kind of sixth sense. I think of it more as a new hormonal gland, secreting into your bloodstream a powerful messenger chemical that can either reward or punish your brain. This gland is connected directly to your core understanding of what good quality work looks and feels like, an understanding that itself improves with experience. This gland will make itself known to you in two ways. First, when you solve a problem in a beautifully elegant way with clean and unobtrusive code that looks good and just feels right, your craftsperson’s gland will ooze something delicious that makes your brain and soul glow from the inside out. You will beam triumphantly at the succinct lines of code on your computer display before bounding outside with a spring in your step to swim up glittering rainbows and kiss soft fluffy puppies. The second way that you may become aware of your craftsperson’s gland, though, is somewhat less pleasurable. In an alternate reality, your parallel self is faced with the same problem, but decides to take a shortcut and get around it by some dubious means – the kind of technical method that the words hack, kludge and bodge are reserved for. As soon as you have done this, or even as you are doing it, your craftsperson’s gland will damn well let you know that you took the wrong fork in the road. 
As your craftsperson’s gland begins to secrete a toxic pus, you will at first become entranced into a vacant stare at the monstrous mess you are considering unleashing upon your site’s visitors, before writhing in the horrible agony of an itch that can never be scratched, and a feeling of being coated with the devil’s own deep and penetrating filth that no shower will ever cleanse. Perhaps I exaggerate slightly, but it is no overstatement to suggest that you will find yourself being guided by proverbial angels and demons perched on opposite shoulders, or a whispering voice inside your head. If you harness this sense, sharpening it as if it were another tool in your kit and letting it guide or at least advise your decision making, you will transcend the rocky realm of random trial and error when faced with problems, and tend toward the right answers instinctively. This gland can also empower your ability to assess your own work, becoming a judge before whom all your work is cross-examined. A good craftsperson regularly takes a step back from their work, and questions every facet of their product for its precise alignment with their core values of quality and sincerity, and even the very necessity of each component. The wrapping By now, you may be thinking that I take this kind of thing far too seriously, but to terrify you further, I haven’t even shared the half of it. Hopefully, though, this gives you an idea of the kind of levels of professionalism and dedication that it should take to get you on your way to becoming a craftsperson. It’s a level of accomplishment and ability toward which we all should strive, both for our personal fulfilment and the betterment of the products we use daily. I look forward to seeing your finely crafted work throughout 2012. 2011 Ben Bodien benbodien 2011-12-24T00:00:00+00:00 https://24ways.org/2011/crafting-the-front-end/ process
271 Creating Custom Font Stacks with Unicode-Range Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We’ve all used it — either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts. If you’re like me, however, you’ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec. One such property — the unicode-range descriptor — sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways. Unicode-range The unicode-range descriptor is designed to help when using fonts that don’t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers. @font-face { font-family: BBCBengali; src: url(fonts/BBCBengali.ttf) format("opentype"); unicode-range: U+00-FF; } In this example, the font is to be used for characters in the range of U+00 to U+FF, which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to ÿ at U+FF – the extent of the Basic Latin block and the Latin-1 Supplement. By adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts. When I say that it’s possible to specify the range of characters the font covers, that’s true, but what you’re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together. The best available ampersand A few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a <span> element with a class applied: <span class="amp">&</span> A CSS rule can then be written to select the <span> and apply a different font: span.amp { font-family: Baskerville, Palatino, "Book Antiqua", serif; } That’s a perfectly serviceable technique, but the drawbacks are clear — you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn’t always possible when working with a CMS. Perhaps we could do this with unicode-range. A better best available ampersand The Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so: @font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } What we’ve done here is specify a new family called Ampersand and create a font stack for it with the user’s locally installed copies of Baskerville, Palatino or Book Antiqua. We’ve then limited it to a single character range — the ampersand. Of course, those don’t need to be local fonts — they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life. We can then use that new family in a regular font stack. 
h1 { font-family: Ampersand, Arial, sans-serif; } With this in place, any <h1> elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn’t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial. You didn’t think it was that easy, did you? Oh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all. If you’re familiar with how CSS works when it comes to unsupported properties, you’ll know that if a browser encounters a property it doesn’t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius — if the browser can’t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect. Less perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters — the whole range. If you’re using a fancy font for flamboyant ampersands, you probably don’t want that applied to all your text if unicode-range isn’t supported. That would be bad. Really bad. Ensuring good fallbacks As ever, the trick is to make sure that there’s a sensible fallback in place if a browser doesn’t have support for whatever technology you’re trying to use. This is where being a super nerd about understanding the spec you’re working with really pays off. We can make use of the rules of the CSS cascade to make sure that if unicode-range isn’t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren’t implemented. @font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } @font-face { font-family: 'Ampersand'; src: local('Arial'); } In theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don’t have support, the second rule should just supersede the first, setting the font to Arial. Unfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That’s both unexpected and unfortunate. Bad Firefox. On your rug. If that doesn’t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That’s really what we’re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we’re never going to use in our page? Then it wouldn’t affect the outcome for browsers that do support ranges. 
@font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } @font-face { /* Ampersand fallback font */ font-family: 'Ampersand'; src: local('Arial'); unicode-range: U+270C; } By specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we’re not using in any of our pages — U+270C. So we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect! That obscure character, my friends, is what Unicode defines as the VICTORY HAND. ✌ So, how can we use this? Ampersands are a neat trick, and it works well in browsers that support ranges, but that’s not really the point of all this. Styling ampersands is fun, but they’re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting. How do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end. Of course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you’re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you’ll have to use your own best judgement based on your needs and audience. One thing to keep in mind is that if you’re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn’t be downloaded if none of the characters within the Unicode range are present in a given page. As ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along. 2011 Drew McLellan drewmclellan 2011-12-01T00:00:00+00:00 https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/ code
270 From Side Project to Not So Side Project In the last article I wrote for 24 ways, back in 2009, I enthused about the benefits of having a pet project, suggesting that we should all have at least one so that we could collaborate with our friends, escape our day jobs, fulfil our own needs, help others out, raise our profiles, make money, and — most importantly — have fun. I don’t think I need to offer any further persuasions: it seems that designers and developers are launching their own pet projects left, right and centre. This makes me very happy. However, there still seems to be something of a disconnect between having a side project and turning it into something that is moderately successful; in particular, the challenge of making enough money to sustain the project and perhaps even elevating it from the sidelines so that it becomes something not so on the side at all. Before we even begin this, let’s spend a moment talking about money, also known as… Evil, nasty, filthy money Over the last couple of years, I’ve started referring to myself as an accidental businessman. I say accidental because my view of the typical businessman is someone who is driven by money, and I usually can’t stand such people. Those who are motivated by profit, obsessed with growth, and take an active interest in the world’s financial systems don’t tend to be folks with whom I share a beer, unless it’s to pour it over them. Especially if they’re wearing pinstriped suits. That said, we all want to make money, don’t we? And most of us want to make a relatively decent amount, too. I don’t think there’s any harm in admitting that, is there? Hello, I’m Elliot and I’m a capitalist. The key is making money from doing what we love. For most people I know in our community, we’ve already achieved that — I’m hard-pressed to think of anyone who isn’t extremely passionate about working in our industry and I think it’s one of the most positive, unifying benefits we enjoy as a group of like-minded people — but side projects usually arise from another kind of passion: a passion for something other than what we do as our day jobs. Perhaps it’s because your clients are driving you mental and you need a break; perhaps it’s because you want to create something that is truly your own; perhaps it’s because you’re sick of seeing your online work disappear so fast and you want to try your hand at print in order to make a more permanent mark. The three factors I listed there led me to create 8 Faces, a printed magazine about typography that started as a side project and is now a very significant part of my yearly output and income. Like many things that prove fruitful, 8 Faces’ success was something of an accident, too. For a start, the magazine was never meant to be profitable; its only purpose at all was to scratch my own itch. Then, after the first issue took off and I realized how much time I needed to spend in order to make the next one decent, it became clear that I would have to cover more than just the production costs: I’d have to take time out from client work as well. Doing this meant I’d have to earn some money. Probably not enough to equate to the exact amount of time lost when I could be doing client work (not that you could ever describe time as being lost when you work on something you love), but enough to survive; for me to feel that I was getting paid while doing all of the work that 8 Faces entailed. 
The answer was to raise money through partnerships with some cool companies who were happy to be associated with my little project. A sustainable business model Business model! I can’t believe I just wrote those words! But a business model is really just a loose plan for how not to screw up. And all that stuff I wrote in the paragraph above about partnering with companies so I could get some money in while I put the magazine together? Well, that’s my business model. If you’re making any product that has some sort of production cost, whether that’s physical print run expenses or up-front dev work to get an app built, covering those costs before you even release your product means that you’ll be in profit from the first copy you sell. This is no small point: production expenses are pretty much the only cost you’ll ever need to recoup, so having them covered before you launch anything is pretty much the best possible position in which you could place yourself. Happy days, as Jamie Oliver would say. Obtaining these initial funds through partnerships has another benefit. Sure, it’s a form of advertising but, done right, your partners can potentially provide you with great content, too. In the case of 8 Faces, the ads look as nice as the rest of the magazine, and a couple of our partners also provide proper articles: genuinely meaningful, relevant, reader-pleasing articles at that. You’d be amazed at how many companies are willing to become partners and, as the old adage goes, if you don’t ask, you don’t get. With profit comes responsibility Don’t forget about the responsibility you have to your audience if you engage in a relationship with a partner or any type of advertiser: although I may have freely admitted my capitalist leanings, I’m still essentially a hairy hippy, and I feel that any partnership should be good for me as a publisher, good for the partner and — most importantly — good for the reader. Really, the key word here is relevance, and that’s where 99.9% of advertising fails abysmally. (99.9% is not a scientific figure, but you know what I’m on about.) The main grey area when a side project becomes profitable is how you share that profit, partly because — in my opinion, at least — the transition from non-profitable side project to relatively successful source of income can be a little blurred. Asking for help for nothing when there’s no money to be had is pretty normal, but sometimes it’s easy to get used to that free help even once you start making money. I believe the best approach is to ask for help with the promise that it will always be rewarded as soon as there’s money available. (Oh, god: this sounds like one of those nightmarish client proposals. It’s not, honest.) If you’re making something cool, people won’t mind helping out while you find your feet. Events often think that they’re exempt from sharing profit. Perhaps that’s because many event organizers think they’re doing the speakers a favour rather than the other way around (that’s a whole separate article), but it’s shocking to see how many people seem to think they can profit from content-makers — speakers, for example — and yet not pay for that content. It was for this reason that Keir and I paid all of our speakers for our Insites: The Tour side project, which we ran back in July. We probably could’ve got away without paying them, especially as the gig was so informal, but it was the right thing to do. 
In conclusion: money as a by-product Let’s conclude by returning to the slightly problematic nature of money, because it’s the pivot on which your side project’s success can swing, regardless of whether you measure success by monetary gain. I would argue that success has nothing to do with profit — it’s about you being able to spend the time you want on the project. Unfortunately, that is almost always linked to money: money to pay yourself while you work on your dream idea; money to pay for more servers when your web app hits the big time; money to pay for efforts to get the word out there. The key, then, is to judge success on your own terms, and seek to generate as much money as you see fit, whether it’s purely to cover your running costs, or enough to buy a small country. There’s nothing wrong with profit, as long as you’re ethical about it. (Pro tip: if you’ve earned enough to buy a small country, you’ve probably been unethical along the way.) The point at which individuals and companies fail — in the moral sense, for sure, but often in the competitive sense, too — is when money is the primary motivation. It should never be the primary motivation. If you’re not passionate enough about something to do it as an unprofitable side project, you shouldn’t be doing it at all. Earning money should be a by-product of doing what you love. And who doesn’t want to spend their life doing what they love? 2011 Elliot Jay Stocks elliotjaystocks 2011-12-22T00:00:00+00:00 https://24ways.org/2011/from-side-project-to-not-so-side-project/ business
269 Adaptive Images for Responsive Designs… Again When I was asked to write an article for 24 ways I jumped at the chance, as I’d been wanting to write about some fun hacks for responsive images and related parsing behaviours. My heart sank a little when Matt Wilcox beat me to the subject, but it floated back up when I realized I disagreed with his method and still had something to write about. So, Matt Wilcox, if that is your real name (and I’m pretty sure it is), I disagree. I see your dirty server-based hack and raise you an even dirtier client-side hack. Evil laugh, etc., etc. You guys can stomach yet another article about responsive design, right? Right? Half the room gets up to leave Whoa, whoa… OK, I’ll cut to the chase… TL;DR In a previous episode, we were introduced to Debbie and her responsive cat poetry page. Well, now she’s added some reviews of cat videos and some images of cats. Check out her new page and have a play around with the browser window. At smaller widths, the images change and the design responds. The benefits of this method are: it’s entirely client-side; images are still shown to users without JavaScript; your media queries stay in your CSS file; there’s no repetition of image URLs; there are no extra downloads per image; it’s fast enough to work on resize; and it’s pure filth. What’s wrong with the server-side solution? Responsive design is a client-side issue; involving the server creates a boatload of problems. It sets a cookie at the top of the page which is read in subsequent requests. However, the cookie is not guaranteed to be set in time for requests on the same page, so the server may see an old value or no value at all. Serving images via server scripts is much slower than plain old static hosting. The URL can only cache with vary: cookie, so the cache breaks when the cookie changes, even if the change is unrelated. Also, far-future caching is out for devices that can change width. It depends on detecting screen width, which is rather messy on mobile devices. Responding to things other than screen width (such as DPI) means packing more information into the cookie, and a more complicated script at the top of each page. So, why isn’t this straightforward on the client? Client-side solutions to the problem involve JavaScript testing user agent properties (such as screen width), looping through some images and setting their URLs accordingly. However, by the time JavaScript has sprung into action, the original image source has already started downloading. If you change the source of an image via JavaScript, you’re setting off yet another request. Images are downloaded as soon as their DOM node is created. They don’t need to be visible, they don’t need to be in the document. new Image().src = url The above will start an HTTP request for url. This is a handy trick for quick requests and preloading, but also shows the browser’s eagerness to download images. Here’s an example of that in action. Check out the network tab in Web Inspector (other non-WebKit development aids are available) to see the image requests. Because of this, some client-side solutions look like this: <img src="t.gif" data-src="real-image.jpg" data-bigger-src="real-bigger-image.jpg"> where t.gif is a 1×1px tiny transparent GIF. This results in no images if JavaScript isn’t available. Dealing with the absence of JavaScript is still important, even on mobile. I was recently asked to make a website work on an old BlackBerry 9000. 
I was able to get most of the way there by preventing that OS parsing any JavaScript, and that was only possible because the site didn’t depend on it. We need to delay loading images for JavaScript users, but ensure they load for users without JavaScript. How can we conditionally parse markup depending on JavaScript support? Oh yeah! <noscript>! <noscript> <img src="image.jpg"> </noscript> Whoa! First spacer GIFs and now <noscript>? This really is the future! The image above will only load for users without JavaScript support. Now all we need to do is send JavaScript in there to get the textContent of the <noscript> element, then we can alter the image source before handing it to the DOM for parsing. Here’s an example of that working … unless you’re using Internet Explorer. Internet Explorer doesn’t retain the content of <noscript> elements. As soon as it’s parsed, it considers it an empty element. FANKS INTERNET EXPLORER. This is why some solutions do this: <noscript data-src="image.jpg"> <img src="image.jpg"> </noscript> so JavaScript can still get at the URL via the data-src attribute. However, repeating stuff isn’t great. Surely we can do better than that. A dirty, dirty hack Thankfully, I managed to come up with a solution, and by me, I mean someone cleverer than me. Pornel’s solution uses <noscript>, but surely we don’t need that. Now, before we look at this, I can’t stress how dirty it is. It’s so dirty that if you’ve seen it, schools will refuse to employ you. <script>document.write('<' + '!--')</script> <img src="image.jpg"> <!----> Phwoar! Dirty, isn’t it? I’ll stop for a moment, so you can go have a wash. Done? Excellent. With this, the image is wrapped in a comment only for users with JavaScript. Without JavaScript, we get the image. Unlike the <noscript> example above, we can get the text content of the comment pretty easily. Hurrah! But wait… Some browsers are sometimes downloading the image, even with JavaScript enabled. Notably Firefox. Huh? Images are downloaded in comments now? What? No. What we’re seeing here is the effect of speculative parsing. Here’s what’s happening: While the browser is parsing the script, it parses the rest of the document. This is usually a good thing, as it can download subsequent images and scripts without waiting for the script to complete. The problem here is we create an unbalanced tree. An unbalanced tree, yesterday. In this case, the browser must throw away its speculative parsing and reparse from the end of the <script> element, taking our document.write into consideration. Unfortunately, by this stage it may have already discovered the image and sent an HTTP request for it. A dirty, dirty hack… that works Pornel was right: we still need the <noscript> element to cater for browsers with speculative parsing. <script>document.write('<' + '!--')</script><noscript> <img src="image.jpg"> </noscript --> And there we have it. We can now prevent images loading for users with JavaScript, but we can still get at the markup. We’re still creating an unbalanced tree and there’s a performance impact in that. However, the parser won’t have got far by the time our script executes, so the impact is small. Unbalanced trees are more of a concern for external scripts; a lot of parsing can happen by the time the script has downloaded and parsed. 
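Getting at that hidden markup from script is straightforward. Here’s a minimal sketch of the idea — the class name and variable names are invented for illustration:
// Find each 'dirty' script element by an agreed class name.
var scripts = document.getElementsByClassName('dirty-script');
for (var i = 0; i < scripts.length; i++) {
  // The comment opened by document.write ends up as a sibling of the script.
  var node = scripts[i].nextSibling;
  // Step past any whitespace text nodes until we reach the comment (nodeType 8).
  while (node && node.nodeType !== 8) {
    node = node.nextSibling;
  }
  if (node) {
    // node.nodeValue holds the markup we hid inside the comment, ready
    // to have its image URLs rewritten before being handed to the DOM.
    var markup = node.nodeValue;
  }
}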
Using dirtiness to create responsive images Now all we need to do is give each of our dirty scripts a class name, then JavaScript can pick them up, grab the markup from the comment and decide what to do with the images. This technique isn’t exclusively useful for responsive images. It could also be used to delay images loading until they’ve scrolled into view. But to do that you’ll need a bulletproof way of detecting when elements are in view. This involves getting the height of the viewport, which is extremely unreliable on mobile devices. Here’s a hastily thrown together example showing how it can be used for responsive images. I adjust the end of the image URLs conditionally depending on the result of media queries. This is done on page load, and on resize. I’m using regular expressions to alter the URLs. Using regex to deal with HTML is usually a sign of insanity, but parsing it with the browser’s DOM parser would trigger the download of images before we change the URLs. My implementation currently requires double-quoted image URLs, because I’m lazy. Wanna fight about it? Media querying via JavaScript Jeremy Keith used document.documentElement.clientWidth in his example, which is great as a proof of concept, but unfortunately is rather unreliable across mobile devices. Thankfully, standards are coming to the rescue with window.matchMedia, which lets us provide a media query string and get a boolean result. There’s even a great polyfill for browsers that don’t support it (as long as they support media queries in CSS). I didn’t go with that for three reasons: I’d like to keep media queries in the CSS file, if possible. I wanted something a little lighter to keep things speedy while resizing. It’s just not dirty enough yet. To make things ultra-dirty, I add a test element to the page with a specific class, let’s say media-test. Then, I control the width of it using media queries in my CSS file: @media all and (min-width: 640px) { .media-test { width: 1px; } } @media all and (min-width: 926px) { .media-test { width: 2px; } } The JavaScript part changes the URL suffix depending on the width of media-test. I’m using a min-width media query, but you can use others such as pixel-ratio to detect high DPI displays. Basically, it’s a hacky way for CSS to set a value that can be picked up by JavaScript. It means the bit that signals changes to the images sits with the rest of the responsive code, without duplication. Also, phwoar, dirty! The API I threw a script together to demonstrate the technique. I’m not particularly attached to it, I’m not even sure I like it, but here’s the API:
responsiveGallery({
  // Class name of dirty script element(s) to target
  scriptClass: 'dirty-gallery-script',
  // Class name for our test element
  testClass: 'dirty-gallery-test',
  // The initial suffix of URLs, the bit that changes.
  initialSuffix: '-mobile.jpg',
  // A map of suffixes, for each width of 'dirty-gallery-test'
  suffixes: {
    '1': '-desktop.jpg',
    '2': '-large-desktop.jpg',
    '3': '-mobile-retina.jpg'
  }
});
The API can cover individual images or multiple galleries at once. In the example I gave at the start of the article I make two calls to the API, one for both galleries, and one for the single image above the video reviews. They’re separate calls because they respond slightly differently. The future Hopefully, we’ll get a proper solution to this soon. My favourite suggestion is the <picture> element that Bruce Lawson covers.
<picture alt="Angry pirate">
  <source src="hires.png" media="min-width:800px">
  <source src="midres.png" media="min-width:480px">
  <source src="lores.png">
  <!-- fallback for browsers without support -->
  <img src="midres.png" alt="Angry pirate">
</picture>
Unfortunately, we’re nowhere near that yet, and I’d still rather have my media queries stay in CSS. Perhaps the source elements could be skipped if they’re display:none; then they could have class names and be controlled via CSS. Sigh. Well, I’m tired of writing now and I’m sure you’re tired of reading. I realize what I’ve presented is yet another dirty hack to the responsive image problem (perhaps the dirtiest?) and may be completely unfeasible in professional situations. But isn’t that the true spirit of Christmas? No. 2011 Jake Archibald jakearchibald 2011-12-08T00:00:00+00:00 https://24ways.org/2011/adaptive-images-for-responsive-designs-again/ ux
268 Getting the Most Out of Google Analytics Something a bit different for today’s 24 ways article. For starters, I’m not a designer or a developer. I’m an evil man who sells things to people on the internet. Second, this article will likely be a little more nebulous than you’re used to, since it covers quite a number of points in a relatively short space. This isn’t going to be the complete Google Analytics Conversion University IQ course compressed into a single article, obviously. What it will be, however, is a primer on setting up and using Google Analytics in real life, and a great deal of what I’ve learned using Google Analytics nearly every working day for the past six (crikey!) years. Also, to be clear, I’ll be referencing new Google Analytics here; old Google Analytics is for loooosers (and those who want reliable e-commerce conversion data per site search term, natch). You may have been running your Analytics account for several years now, dipping in and out, checking traffic levels, seeing what’s popular… and that’s about it. Google Analytics provides so much more than that, but the number of reports available can often intimidate users, and documentation and case studies on their use are minimal at best. Let’s start! Setting up your Analytics profile Before we plough on, I just want to run through a quick checklist to make sure some basic settings have been enabled for your profile. If you haven’t clicked it, click the big cog on the top-right of Google Analytics and we’ll have a poke about.
If you have an e-commerce site, e-commerce tracking has been enabled.
If your site has a search function, site search tracking has been enabled.
Query string parameters that you do not want tracked as separate pages have been excluded (for example, any parameters needed for your platform to function, otherwise you’ll get multiple entries for the same page appearing in your reports).
Filters have been enabled on your main profile to exclude your office IP address and any IPs of people who frequently access the site for work purposes. In decent numbers they tend to throw data off a tad.
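As an aside: if you’re wondering what feeds the e-commerce tracking mentioned above, with the asynchronous ga.js snippet it’s a handful of calls made at the point of purchase — every value below is invented for illustration:
// Register the transaction: order ID, affiliation, total, tax,
// shipping, city, state and country.
_gaq.push(['_addTrans', '1234', 'Acme Store', '29.99', '5.00', '3.50', 'London', '', 'UK']);
// Register each item in the order: order ID, SKU, name, category,
// unit price and quantity.
_gaq.push(['_addItem', '1234', 'SKU-001', 'Widget', 'Widgets', '29.99', '1']);
// Send the transaction to Google Analytics.
_gaq.push(['_trackTrans']);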
 You may also find the need to set up multiple profiles prefiltered for specific audience segments. For example, at Lovehoney we have seventeen separate profiles that allow me quick access to certain countries, devices and traffic sources without having to segment first. You’ll also find load time for any complex reports much improved. Use the same filter screen as above to set up a series of profiles that only include, say, mobile visits, or UK visitors, so you can quickly analyse important segments. Matt, what’s a segment? A segment is a subsection of your visitor base, which you define and then call on in reports to see specific data for that subsection. For example, in this report I’ve defined two segments, the first for IE6 users and the second for IE7. Segments are easily created by clicking the Advanced Segments tabs at the top of any report and clicking +New Custom Segment. What does your site do? Understanding the goals of your site is an oft-covered topic, but it’s necessary not just to form a better understanding of your business and prioritize your time. Understanding what you wish visitors to do on your site translates well into a goal-driven analytics package like Google Analytics. Every site exists essentially to sell something, either financially through e-commerce, or to sell an idea or impart information, get people to download a CV or enquire about a service, or to sell space on that website to advertisers. If the site did not provide a positive benefit to its owners, it would not have a reason for being. Once you have understood the reason why you have a site, you can map that reason on to one of the three goal types Google Analytics provides. E-commerce This conversion type registers transactions as part of a sales process which requires a monetary value, what products have been bought, an SKU (stock keeping unit), affiliation (if you’re then attributing the sale to a third party or franchise) and so on. The benefit of e-commerce tracking is not only assigning non-arbitrary monetary value to behaviour of visitors on your site, as well as being able to see ancillary costs such as shipping, but seeing product-level information, like which products are preferred from various channels, popular categories, and so on. However, I find the e-commerce tracking options also useful for non-e-commerce sites. For example, if you’re offering downloads or subscriptions and having an email address or user’s details is worth something to you, you can set up e-commerce tracking to understand how much value your site is bringing. For example, an email address might be worth 20p to you, but if it also includes a name it’s worth 50p. A contact telephone number is worth £2, and so on. Page goals Page goals, unsurprisingly, track a visit to a page (often with a sequence of pages leading up to that page). This is what’s referred to as a goal funnel, and is generally used to track how visitors behave in a multistep checkout. Interestingly, the page doesn’t have to actually exist. For example, if you have a single page checkout, you can register virtual page views using trackPageview() when a visitor clicks into a particular section of the checkout or other form. If your site is geared towards getting someone to a particular page, but where there isn’t a transaction (for example, a subscription page) this is for you. There are also behavioural goals, such as time on site and number of pages viewed, which are geared towards sites that make money from advertising.
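Before we move on, here’s what that virtual page view trick might look like with the asynchronous ga.js snippet — the path is invented, and simply needs to match whatever your goal and funnel are set up to expect:
// Fire a virtual page view when the visitor reaches the payment
// step of a single-page checkout.
_gaq.push(['_trackPageview', '/checkout/payment-details']);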
But, going back to the page goals, these can be abstracted using regular expressions, meaning that you can define a funnel based on page type rather than having to set individual folders. In this example, I’ve created regexes for the main page types on my site, so I can create a wide funnel that captures visitors from where they enter through to checkout. Events Event tracking registers a predefined event, such as playing a video, or some interaction that can trigger JavaScript, such as a Tweet This button. Events can then be triggered using the trackEvent() call. If you want someone to complete watching a video, you would code your player to fire trackEvent() upon completion. While I don’t use events as goals, I use events elsewhere to see how well a video play aids conversion. This not only helps me justify the additional spend on creating video content, but also quickly highlights which videos are underperforming as sales tools.
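For example, a hypothetical video player integration might fire something like this on completion, using the asynchronous ga.js syntax — the category, action and label values here are invented for illustration:
// Record that a visitor watched a product video all the way through.
_gaq.push(['_trackEvent', 'Videos', 'Completed', 'Blue Widget demo']);
What a visitor can tell you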
Now you have some proper goals set up, we can start to see how changes in content (on-site and external) affect those goals. Ultimately, when a visitor comes to your site, they bring information with them:
where they came from (a search engine – including the keyword searched for; a referral; direct; affiliate; or ad campaign)
demographics (country; whether they’re new or returning, within thirty days)
technical information (browser; screen size; device; bandwidth)
site-specific information (landing page; next click; previous values assigned to them as custom variables*)
* A note about custom variables. There’s no hope in hell that I can cover custom variables in this article. Go research them. Custom variables are the single best way to hack Google Analytics and bend it to your will. Custom variables allow you to record anything you want about a visitor, which that visitor will then carry around with them between visits. It’s also great for plugging other services into Google Analytics (as shown by the marvelous way Visual Website Optimizer allows you to track and segment tests within the GA interface). Just make sure not to breach the terms of service, eh? CSI your website Police procedural TV shows are all the same: the investigators are called to a crime and come across a clue; there’s then an autopsy; new evidence leads them to a new location; they find a new clue; they put two and two together; they solve the mystery. This is your life now. Exciting! So, now you’re gathering a wealth of information about what sort of people visit your site, what they do when they’re there, and what eventually gets them to drive value to you. It’s now your job to investigate all these little clues to see which types of people drive the most value, and what you can change to improve it. Maybe not that exciting. However, Google Analytics comes pre-armed with extensive reports for you to delve into. As an e-commerce guy (as opposed to a page goal guy) my day pretty much follows the pattern below. Look at e-commerce conversion rate by traffic source compared to the same day in the previous week and previous month. As ours is an e-commerce site, we have weekly and monthly trends. A big spike on Sundays and Mondays, and payday towards the end of the month is always good; on the third week of a month there tends to be a lull. Spend time letting your Google Analytics data brew, understand your own trends and patterns, and you’ll start to get a feel for when something isn’t quite right.
Traffic Sources → Sources → All Traffic
Look at the conversion rate by landing page for any traffic source that feels significantly different to what’s expected. Check bounce rates, drill down to likely landing pages and check search keyword or referral site to see if it’s a particular subset of visitor. You can do this by clicking Secondary Dimension and choosing Keyword or Source. If it’s direct, choose Visitor Type to break down by new or returning visitor.
Content → Site Content → Landing Pages
I then tend to flip into Content Drilldown to see what the next clicks were from those landing pages, and whether they changed significantly to the date I’m comparing with. If they have, that’s usually an indicator of changed content (or its relevancy). Remember, if a bunch of people have found their way to your page via a method you’re not expecting (such as a mention on a Spanish radio station – this actually happened to me once), while the content hasn’t changed, the relevancy of it to the audience may have.
Content → Site Content → Content Drilldown
Once I have an idea of what content was consumed, and whether it was relevant to the user, I then look at the visitor specifics, such as browser or demographic data, to see again whether the change was limited to a specific subset. Site speed, for example, is normally a good factor towards bounce rate, so compare that with previous data as well. Now, to be investigating at this level you still need a serious amount of data, in order to tell what’s a significant change or not. If you’re struggling with a small number of visitors, you might find reporting on a weekly or fortnightly basis more appropriate. However, once you’ve looked into the basics of why changes happen to the value of your site, you’ll soon find yourself limited by the reports offered in Standard Reporting. So, it’s time to build your own. Hooray! Custom reporting Google Analytics provides the tools to build reports specific to the types of investigations you frequently perform. Welcome to my world. Custom reports are quite simple to build: first, you determine the metric you want the report to cover (number of visitors, bounce rate, conversion rate, and so on), then choose a set of dimensions that you’d like to segment the report by (say, the source of the traffic, and whether they were new or returning users). You can filter the report, including or excluding particular dimension values, and you can assign the report to any of the profiles you created earlier. In the example below, I’ve created a report that shows me visits and conversion rate for any Google traffic that landed directly only on a product page. I can then drill down on each product page to see the complete phrases used to search. I can use this information in two ways:
I can see which products aren’t converting, which shows me where I need to work harder on merchandising.
I can give this information to my content team, showing them the actual phrases visitors used to reach our product content, helping them write better targeted product descriptions.
The possibilities here are nearly endless, but here are a few examples of reports I find useful:
Non-brand inbound search
By creating a report that shows inbound search traffic which doesn’t include your brand, you can see more clearly the behaviour of visitors most likely to be unfamiliar with your site and brand values, without having to rely on the clumsy new or returning demographic data.
Traffic/conversion/sales by hour
This is pure stats porn, but actually more useful than real-time data. By seeing this data broken down at an hourly level, you can not only compare the current day to previous days, but also see the best performing times for email broadcasts and tweets.
Visits, load time, conversion and sales by page and browser
Page speed can often kill conversion rates, but it’s difficult to prove the value of focusing on speed in monetary terms. Having this report to hand helps me drive Operation Greenbelt, our effort to get into the sub-1.5 second band in Google Webmaster Tools.
Useful things you can’t do in custom reporting
If you have a search function on your website, then Conversion Rate and Products Bought by Site Search Term is an incredibly useful report that allows you to measure the effectiveness of your site’s search engine at returning products and content related to the search term used.
By including the products actually bought by visitors who searched for each term, you can use this information to better searchandise these results, escalating high propensity and high value products to the top of the results. However, it’s not possible to get this information out of new Google Analytics. Try it: select the following in the report builder:
Metrics: total unique searches; e-commerce or goal conversion rate
Dimensions: search term; product
You’ll see that the data returned is a little nonsensical, though a 2,000% conversion rate would be nice. However, you can get more accurate information using advanced segments. By creating individual segments to define users who have searched for a particular term, you can run the sales performance and product performance reports as normal. It’s laborious, but it teaches a good lesson: data that seems inaccessible can normally be found another way! Reporting infrastructure Now that you have a series of reports that you can refer to on a daily or weekly basis, it’s time to put together a regular reporting infrastructure. Even if you’re not reporting to someone, having a set of key performance indicators that you can use to see how your performance is improving over time allows you to set yourself business goals on a monthly and annual basis. For my own reporting, I take some high-level metrics (such as visitors, conversion rate and average order value), and segment them by traffic source and, separately, landing page. These statistics I record weekly and report:
current week compared with previous week
same week previous year (if available)
4 week average
13 week average
52 week average (if available)
This takes into account weekly, monthly, seasonal and annual trends, and gives you a much clearer view of your performance. Getting data in other ways If you’re using Google Analytics frequently, with any large site you’ll come to a couple of conclusions: doing any kind of practical comparative analysis is unwieldy, and boy, Google Analytics is slow! As you work with bigger datasets and put together more complex queries, you’ll see the loading graphic more than you’ll see actual data. So when you reach that level, there are ways to completely bypass the Google Analytics interface altogether, and get data into your own spreadsheet application for manipulation. Data Feed Query Explorer If you just want to pull down some quick statistics but still use complex filters and exotic metric and dimension combinations, the Data Feed Query Explorer is the quickest way of doing so. Authenticate with your Google Analytics account, select a profile, and you can start selecting metrics and dimensions to be generated in a handy, selectable tabulated format. Google Analytics API If you’re feeling clever, you can bypass having to copy and paste data by pulling it directly into Excel, Google Docs or your own application using the Google Analytics API. There are several scripts and plugins available to do this. I use Automate Analytics Google Docs code (there’s also a paid version that simplifies setup and creates some handy reports for you). New shiny things Well, now that that’s over, I can show you some cool stuff. Well, at least it’s cool to me. Google Analytics is being constantly improved and new functionality is introduced nearly every month. Here are a couple of my favourites. Multichannel attribution Not every visitor converts on your site on the first visit. They may not even do so on the second visit, or third.
If they convert on the fourth visit, but each time they visit they do so via a different channel (for example, Search PPC, Search Organic, Direct, Email), which channel do you attribute the conversion to? The last channel, or the first? Dilemma! Google now has a Multichannel Attribution report, available in the Conversion category, which shows how each channel assists in converting, the overlap between channels, and where in the process that channel was important. For example, you may have analysed your blog traffic from Twitter and become disheartened that not many people were subscribing after visiting from Twitter links, but instead your high-value subscribers were coming from natural search. On the face of it, you’d spend less time tweeting, but a multichannel report may tell you that visitors first arrived via a Twitter link and didn’t subscribe, but then came back later after searching for your blog name on Google, after which they did. Don’t pack Twitter in yet! Visitor and goal flow Visitor and goal flow are amazing reports that help you visualize the flow of traffic through your site and, ultimately, into your checkout funnel or similar goal path. Flow reports are perfect for understanding drop-off points in your process, as well as what the big draws are on each page. Previously, if you wanted to visualize this data you had to set up several abstracted microgoals and chain them together in custom reports. Frankly, it was a pain in the arse and burned through your precious and limited goal allocation. Visitor flow bypasses all that and produces the report in an interactive flow diagram. While it doesn’t show you the holy grail of conversion likelihood by each path, you can segment visitor flow so that you can see very specifically how different segments of your visitor base behave. Go play with it now! 2011 Matt Curry mattcurry 2011-12-18T00:00:00+00:00 https://24ways.org/2011/getting-the-most-out-of-google-analytics/ business
267 Taming Complexity I’m going to step into my UX trousers for this one. I wouldn’t usually wear them in public, but it’s Christmas, so there’s nothing wrong with looking silly. Anyway, to business. Wherever I roam, I hear the familiar call for simplicity and the denouncement of complexity. I read often that the simpler something is, the more usable it will be. We understand that simple is hard to achieve, but we push for it nonetheless, convinced it will make what we build easier to use. Simple is better, right? Well, I’ll try to explore that. Much of what follows will not be revelatory to some but, like all good lessons, I think this serves as a welcome reminder that as we live in a complex world it’s OK to sometimes reflect that complexity in the products we build. Myths and legends Less is more, we’ve been told, ever since master of poetic verse Robert Browning used the phrase in 1855. Well, I’ve conducted some research, and it appears he knew nothing of web design. Neither did modernist architect Ludwig Mies van der Rohe, a later pedlar of this worthy yet contradictory notion. Broad is narrow. Tall is short. Eggs are chips. See: anyone can come up with this stuff. To paraphrase Einstein, simple doesn’t have to be simpler. In other words, simple doesn’t dictate that we remove the complexity. Complex doesn’t have to be confusing; it can be beautiful and elegant. On the web, complex can be necessary and powerful. A website that simplifies the lives of its users by offering them everything they need in one site or screen is powerful. For some, the greater the density of information, the more useful the site. In our decision-making process, principles such as Occam’s razor (in a nutshell: simple is better than complex) are useful, but simple is for the user to determine through their initial impression and subsequent engagement. What appears simple to me or you might appear very complex to someone else, based on their own mental model or needs. We can aim to deliver simple, but they’ll be the judge. As a designer, developer, content alchemist, user experience discombobulator, or whatever you call yourself, you’re often wrestling with a wealth of material, a huge number of features, and numerous objectives. In many cases, much of that stuff is extraneous, and goes in the dustbin. However, it can be just as likely that there’s a truckload of suggested features and content because it all needs to be there. Don’t be afraid of that weight. In the right hands, less can indeed mean more, but it’s just as likely that less can very often lead to, well… less. Complexity is powerful Simple is the ability to offer a powerful experience without overwhelming the audience or inducing information anxiety. Giving them everything they need, without having them ferret off all over a site to get things done, is important. It’s useful to ask throughout a site’s lifespan, “does the user have everything they need?” It’s so easy to let our designer egos get in the way and chop stuff out, reduce down to only the things we want to see. That benefits us in the short term, but compromises the audience long-term. The trick is not to be afraid of complexity in itself, but to avoid creating the perception of complexity. Give a user a flight simulator and they’ll crash the plane or jump out. Give them everything they need and more, but make it feel simple, and you’re building a relationship, empowering people.
This can be achieved carefully with what some call gradual engagement, and often the sensible thing might be to unleash complexity in carefully orchestrated phases, initially setting manageable levels of engagement and interaction, gradually increasing the inherent power of the product and fostering an empowered community. The design aesthetic Here’s a familiar scenario: the client or project lead gets overexcited and skips most of the important decision-making, instead barrelling straight into a bout of creative direction Tourette’s. Visually, the design needs to be minimal, white, crisp, full of white space, have big buttons, and quite likely be “clean”. Of course, we all like our websites to be clean as that’s more hygienic. But what do these words even mean, really? Early in a project they’re abstract distractions, unnecessary constraints. This premature narrowing forces us to think much more about throwing stuff out rather than acknowledging that what we’re building is complex, and many of the components perhaps necessary. Simple is not a formula. It cannot be achieved just by using a white background, by throwing things away, or by breathing a bellowsful of air in between every element and having it all float around in space. Simple is not a design treatment. Simple is hard. Simple requires deep investigation, a thorough understanding of every aspect of a project, in line with the needs and expectations of the audience. Recognizing this helps us empathize a little more with those most vocal of UX practitioners. They usually appreciate that our successes depend on a thorough understanding of the user’s mental models and expected outcomes. I personally still consider UX people to be web designers like the rest of us (mainly to wind them up), but they’re web designers that design every decision, and by putting the user experience at the heart of their process, they have a greater chance of finding simplicity in complexity. The visual design aesthetic — the façade — is only a part of that. Divide and conquer I’m currently working on an app that’s complex in architecture, and complex in ambition. We’ll be releasing in carefully orchestrated private phases, gradually introducing more complexity in line with the unavoidably complex nature of the objective, but my job is to design the whole, the complete system as it will be when it’s out of beta and beyond. I’ve noticed that I’m not throwing much out; most of it needs to be there. Therefore, my responsibility is to consider interesting and appropriate methods of navigation and bring everything together logically. I’m using things like smart defaults, graphical timelines and colour keys to make sense of the complexity, techniques that are sympathetic to the content. They act as familiar points of navigation and reference, yet are malleable enough to change subtly to remain relevant to the information they connect. It’s really OK to have a lot of stuff, so long as we make each component work smartly. It’s a divide and conquer approach. By finding simplicity and logic in each content bucket, I’ve made more sense of the whole, allowing me to create key layouts where most of the simplified buckets are collated and sometimes combined, providing everything the user needs and expects in the appropriate places. I’m also making sure I don’t reduce the app’s power. I need to reflect the scale of opportunity, and provide access to or knowledge of the more advanced tools and features for everyone: a window into what they can do and how they can help. 
I know it’s the minority who will be actively building the content, but the power is in providing those opportunities for all. Much of this will be familiar to the responsible practitioners who build websites for government, local authorities, utility companies, newspapers, magazines, banking, and we-sell-everything-ever-made online shops. Across the web, there are sites and tools that thrive on complexity. Alas, the majority of such sites have done little to make navigation intuitive, or empower audiences. Where we can make a difference is by striving to make our UIs feel simple, look wonderful, not intimidating — even if they’re mind-meltingly complex behind that façade. Embrace, empathize and tame So, there are loads of ways to exploit complexity, and make it seem simple. I’ve hinted at some methods above, and we’ve already looked at gradual engagement as a way to make sense of complexity, so that’s a big thumbs-up for a release cycle that increases audience power. Prior to each and every release, it’s also useful to rest on the finished thing for a while and use it yourself, even if you’re itching to release. ‘Ready’ often isn’t, and ‘finished’ never is, and the more time you spend browsing around the sites you build, the more you learn what to question, where to add, or subtract. It’s definitely worth building in some contingency time for sitting on your work, so to speak. One thing I always do is squint at my layouts. By squinting, I get a sort of abstract idea of the overall composition, and general feel for the thing. It makes my face look stupid, but helps me see how various buckets fit together, and how simple or complex the site feels overall. I mentioned the need to put our design egos to one side and not throw out anything useful, and I think that’s vital. I’m a big believer in economy, reduction, and removing the extraneous, but I’m usually referring to decoration, bells and whistles, and fluff. I wouldn’t ever advocate the complete removal of powerful content from a project roadmap. Above all, don’t fear complexity. Embrace and tame it. Work hard to empathize with audience needs, and you can create elegant, playful, risky, surprising, emotive, delightful, and ultimately simple things. 2011 Simon Collison simoncollison 2011-12-21T00:00:00+00:00 https://24ways.org/2011/taming-complexity/ ux
266 Collaborative Development for a Responsively Designed Web In responsive web design we’ve found a technique that allows us to design for the web as a medium in its own right: one that presents a fluid, adaptable and ever changing canvas. Until this point, we gave little thought to the environment in which users will experience our work, caring more about the aggregate than the individual. The applications we use encourage rigid layouts, whilst linear processes focus on clients signing off paintings of websites that have little regard for behaviour and interactions. The handover of pristine, pixel-perfect creations to developers isn’t dissimilar to farting before exiting a crowded lift, leaving front-end developers scratching their heads as they fill in the inevitable gaps. If you haven’t already, I recommend reading Drew’s checklist of things to consider before handing over a design. Somehow, this broken methodology has survived for the last fifteen years or so. Even the advent of web standards has had little impact. Now, as we face an onslaught of different devices, the true universality of the web can no longer be ignored. Responsive web design is just the thin end of the wedge. Largely concerned with layout, its underlying philosophy could ignite a trend towards interfaces that adapt to any number of different variables: input methods, bandwidth availability, user preference – you name it! With such adaptability, a collaborative and iterative process is required. Ethan Marcotte, who worked with the team behind the responsive redesign of the Boston Globe website, talked about such an approach in his book: The responsive projects I’ve worked on have had a lot of success combining design and development into one hybrid phase, bringing the two teams into one highly collaborative group. Whilst their process still involved the creation of desktop-centric mock-ups, these were presented to the entire team early on, where questions about how pages might adapt and behave at different sizes were asked. Mock-ups were quickly converted into HTML prototypes, meaning further decisions could be based on usage rather than guesswork (and endless hours spent in Photoshop). Regardless of the exact process, it’s clear that the relationship between our two disciplines is more crucial than ever. Yet, historically, it seems a wedge has been driven between us – perhaps a result of segregation and waterfall-style processes – resulting in animosity. So how can we improve this relationship? Ultimately, we’ll need to adapt, but even within existing workflows we can start to overlap. Simply adjusting our attitude can effect change, and bring design and development teams closer together. Good design is constant contact. Mark Otto The way we work needs to be more open and inclusive. For example, ensuring members of the development team attend initial kick-off meetings and design workshops will not only ensure technical concerns are raised, but mean that those implementing our designs better understand the problems we’re trying to solve. It can also be useful at this stage to explain how you work and the sort of deliverables you expect to produce. This will give developers a chance to make recommendations on how these can be optimized for their own needs. You may even find opportunities to share the load. On a recent project I worked on, our development partners offered to produce the interactive prototypes needed for user testing. 
This allowed us to concentrate on refining the experience, whilst they were able to get a head start on building the product. While developers should be involved at the beginning of projects, it’s also important that designers are able to review and contribute to a product as it’s being built. Any handover should be done in person, and ideally you’ll have a day set aside to do so. Having additional budget available for follow-up design reviews is also recommended. Learning how to use version control tools like Subversion or Git will allow you to work within the same environment as developers, and allow you to contribute code or graphic assets directly to a project if needed. Don’t underestimate the benefits of designer and developer sitting next to each other. Subtle nuances can be explored far more easily than if they were conducted over email or phone. As Ethan writes, “‘Design’ is the means, not merely the end; the path we walk over the course of a project, the choices we make”. It’s from collaboration like this that I’ve become fond of producing visual style guides. These demonstrate typographic treatments for common markup and patterns (blockquotes, lists, pagination, basic form controls and so on). Thinking in terms of components rather than individual pages not only fits in better with how a developer will implement a site, but can also ensure your design works as a coherent whole. Despite the amount of research and design produced, when it comes to the crunch, there will always be a need for compromise. As the old saying goes, ‘fast, cheap and good – pick two.’ It’s important that you know which pieces are crucial to a design and which areas can allow for movement. Pick your battles wisely. Having an agreed set of design principles can be useful when making such decisions, as they help everyone focus on the goals of the project. The best compromises are reached when both sides understand the issues of the other. Richard Rutter Ultimately, better collaboration comes through a shared understanding of the different competencies required to build a website. Instead of viewing ourselves in terms of discrete roles, we should instead look to emphasize our range of abilities, and work with others whose skills are complementary. Perhaps somebody who actively seeks to broaden their knowledge is the mark of a professional. Seek these people out. The best developers I’ve worked with have a respect for design, probably having attempted to do some themselves! Having wrangled with a few MySQL databases myself, I certainly believe the obverse is true. While knowing HTML won’t necessarily make you a better designer, it will help you understand the issues being faced by a front-end developer and, more importantly, allow you to offer solutions or alternative approaches. So take a moment to think about how you work with developers and how you could improve your relationship with them. What are you doing to ease the path towards our collaborative future? 2011 Paul Lloyd paulrobertlloyd 2011-12-05T00:00:00+00:00 https://24ways.org/2011/collaborative-development-for-a-responsively-designed-web/ business
265 Designing for Perfection Hello, 24 ways readers. I hope you’re having a nice run up to Christmas. This holiday season I thought I’d share a few things with you that have been particularly meaningful in my work over the last year or so. They may not make you wet your santa pants with new-idea-excitement, but in the context of 24 ways I think they may serve as a nice lesson and a useful seasonal reminder going into the New Year. Enjoy! Story Despite being a largely scruffy individual for most of my life, I had some interesting experiences regarding kitchen tidiness during my third year at university. As a kid, my room had always been pretty tidy, and as a teenager I used to enjoy reordering my CDs regularly (by artist, label, colour of spine – you get the picture); but by the time I was twenty I’d left most of these traits behind me, mainly due to a fear that I was turning into my mother. The one anally retentive part of me that remained, however, lived in the kitchen. For some reason, I couldn’t let all the pots and crockery be strewn across the surfaces after cooking. I didn’t care if they were washed up or not, I just needed them tidied. The surfaces needed to be continually free of grated cheese, breadcrumbs and ketchup spills. Also, the sink always needed to be clear. Always. Even a lone teabag, discarded casually into the sink hours previously, would give me what I used to refer to as “kitchen rage”. Whilst this behaviour didn’t cause any direct conflicts, it did often create weirdness. We would be happily enjoying a few pre-night out beverages (Jack Daniels and Red Bull – nice) when I’d notice the state of the kitchen following our round of customized 49p Tesco pizzas. Kitchen rage would ensue, and I’d have to blitz the kitchen, which usually resulted in me having to catch everyone up at the bar afterwards. One evening as we were just about to go out, I was stood there, in front of the shithole that was our kitchen with the intention of cleaning it all up, when a realization popped into my head. In hindsight, it was a pretty obvious one, but it went along the lines of “What the fuck are you doing? Sort your life out”. I sodded the washing up, rolled out with my friends, and had a badass evening of partying. After this point, whenever I got the urge to clean the kitchen, I repeated that same realization in my head. My tidy kitchen obsession strived for a level of perfection that my housemates just didn’t share, so it was ultimately pointless. It didn’t make me feel that good, either; it was like having a cigarette after months of restraint – initially joyous but soon slightly shameful. Lesson Now, around seven years later, I’m a designer on the web and my life is chaotic. It features no planning for significant events, no day-to-day routine or structure, no thought about anything remotely long-term, and I like to think I do precisely what I want. It seems my days of striving for something ordered and tidy, in most parts of my life, are long gone. For much of my time as a designer, though, it’s been a different story. I relished industry-standard terms such as ‘pixel perfection’ and ‘polished PSDs’, taking them into my stride as I strove to design everything that was put on my plate perfectly. Even down to grids and guidelines, all design elements would be painstakingly aligned to a five-pixel grid. There were no seven-pixel margins or gutters to be found in my design work, that’s for sure. I put too much pride and, inadvertently, too much ego into my work.
Things took too long to create, and because of the amount of effort put into the work, significant changes, based on client feedback for example, were more difficult to stomach. Over the last eighteen months I’ve made a conscious effort to change the way I approach designing for the web. Working on applications has probably helped with this; they seem to have a more organic development than rigid content-based websites. Mostly though, a realization similar to my kitchen rage one came about when I had to make significant changes to a painstakingly crafted Photoshop document I had created. The changes shouldn’t have been difficult or time-consuming to implement, but they were turning out to be. One day, frustrated with how long it was taking, the refrain “What the fuck are you doing? Sort your life out” again entered my head. I blazed the rest of the work, not rushing or doing scruffy work, but just not adhering to the insane levels of perfection I had previously set for myself. When the changes were presented, everything went down swimmingly. The client in this case (and I’d argue most cases) cared more about the ideas than the perfect way in which they had been implemented. I had taken myself and my ego out of the creative side of the work, and it had been easier to succeed. Argument I know many other designers who work on the web share such aspirations to perfection. I think it’s a common part of the designer DNA, but I’m not sure it really has a place when designing for the web. First, there’s the environment. The landscape in which we work is continually shifting and evolving. The inherent imperfection of the medium itself makes attempts to create perfect work for it redundant. Whether you consider it a positive or negative point, the products we make are never complete. They’re always scaling and changing. Like many aspects of web design, this striving for perfection in our design work is a way of thinking borrowed from other design industries where it’s more suited. A physical product cannot be as easily altered or developed after it has been manufactured, so the need to achieve perfection when designing is more apt. Designers who can relate to anything I’ve talked about can easily let go of that anal retentiveness if given the right reasons to do so. Striving for perfection isn’t a bad thing, but I simply don’t think it can be achieved in such a fast-moving, unique industry. I think design for the web works better when it begins with quick and simple, followed by iteration and polish over time. To let go of ego and to publish something that you’re not completely happy with is perhaps the most difficult part of the job for designers like us, but it’s followed by a satisfaction of knowing your product is alive and breathing, whereas others (possibly even competitors) may still be sitting in Photoshop, agonizing over whether a margin should be twenty or forty pixels. I keep telling myself to stop sitting on those two hundred ideas that are all half-finished. Publish them, clean them up and iterate over time. I’ve been telling myself this for months and, hopefully, writing this article will give me the kick in the arse I need. Hopefully, it will also give someone else the same kick. 2011 Greg Wood gregwood 2011-12-17T00:00:00+00:00 https://24ways.org/2011/designing-for-perfection/ process
264 Dynamic Social Sharing Images Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you’d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days! With so many social channels and a proliferation of content and promotions flying past in everyone’s streams, it’s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader’s attention if you’re trying to get as many eyes as possible on that sweet, sweet social content. One of the best ways to grab attention with your posts or tweets is to include an image. There’s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures of anything from 35% to 150% improvement from just having an image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much. So without hard stats to quote, we’ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them. Adding sharing images The process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified. <meta property="og:image" content="https://example.com/my_image.jpg"> There’s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing. This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don’t necessarily have an image? One approach is to use stock photography, but that’s not going to be right for every situation. This was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don’t usually lend themselves directly to being the main ‘hero’ image for an article. Putting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article. One of the hand-made sharing images from 2017 Each day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this. Hatching a new plan One initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.
The more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS! In fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn’t this just be another page on the site generated by the CMS? Breaking it down, I figured the steps needed would be something like:
1. Create the CSS to lay out a component that could be turned into an image
2. Add a new field to articles in the CMS to hold a handpicked quote
3. Build a new article template in the CMS to output the author name and quote dynamically for any article
4. … um … screenshot?
I thought I’d get cracking and see if I could figure out the final steps later. Building the page The first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul. Paul’s code uses a fixed dimension container in the HTML, set to 600 × 315px. This is to make it the correct aspect ratio for Facebook’s recommended image size. It’s useful to remember here that it doesn’t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser. With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul’s markup. I also added the quote as a new field on the article in the CMS, so each ‘image’ could be quickly and easily customised in the editing process. I added a new field to the article template to capture the quote. With very little effort, we quickly had a page to dynamically generate our ‘image’ right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article. Our automatically generated layout direct from the CMS It soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that’s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought… how could I automatically take a screenshot? Enter Puppeteer Puppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It’s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface. Headless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or… get this… rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node. Using Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it’s actually Puppeteer’s ‘hello world’ example.
First you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don’t need to worry about installing that. At the command line:
npm i puppeteer
Then save a new file as example.js with this code:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();
and then run it using Node:
node example.js
This will output an image file example.png to disk, which contains a screenshot of, in this case https://example.com. The logic of the code is reasonably easy to follow:
Launch a browser
Open up a new page
Goto a URL
Take a screenshot
Close the browser
The async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That’s useful with actions like loading a web page that might take some time to complete. They’re used with Promises, and the effect is to make asynchronous code behave as if it’s synchronous. You can read more about async and await at MDN if you’re interested. That’s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there? Our content is up in the corner of the image with lots of empty space. That’s not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 × 600, so we need to find out how to adjust that. Fortunately, the docs aren’t the worst and I was able to find the page.setViewport() method pretty easily.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');
  await page.setViewport({ width: 600, height: 315 });
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();
This worked! The screenshot is now 600 × 315 as expected. That’s exactly what we asked for. Trouble is, that’s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.
await page.setViewport({ width: 600, height: 315, deviceScaleFactor: 2 });
Perfect! We now have a programmatic way of turning a URL into an image. Improving the script Rather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site’s build process to automate the generation of the image. Our goal is to call the script like this:
node shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/
And I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.
// Get the URL and the slug segment from it const url = process.argv[2]; const segments = url.split('/'); // Get the second-to-last segment (the slug) const slug = segments[segments.length-2]; We can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page. (async () => { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto(url + 'sharing'); await page.setViewport({ width: 600, height: 315, deviceScaleFactor: 2 }); await page.screenshot({path: slug + '.png'}); await browser.close(); })(); Once you’re generating the image with Node, there’s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project. You can also run optimisations on the file. I’m using imagemin with pngquant to reduce the file size a little. const imagemin = require('imagemin'); const imageminPngquant = require('imagemin-pngquant'); await imagemin([slug + '.png'], 'build', { plugins: [ imageminPngquant({quality: '75-90'}) ] }); You can see the completed example as a gist. Integrating it with your CMS So we now have a command we can run to take a URL and generate a custom image for that URL. It’s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you’re using, but it’s likely not too hard as long as you can run a command as part of the process. For 24 ways this year, I’ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I’ll look to automate this one step further. We may also look at having a few slightly different layouts to choose from, so that each day isn’t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there’s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities! By using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we’ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images. I hope you’ll give it a try! 2018 Drew McLellan drewmclellan 2018-12-24T00:00:00+00:00 https://24ways.org/2018/dynamic-social-sharing-images/ code
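A natural extension, if you ever need to regenerate the images for a whole batch of articles rather than one at a time, is a small wrapper that calls the script in a loop. The sketch below is only illustrative: it assumes a hypothetical urls.json file containing an array of article URLs, sitting alongside the shoot-sharing-image.js script from the article.

// regenerate-all.js: a hypothetical batch wrapper (not part of the
// original workflow). Assumes urls.json holds an array of article URLs.
const { execFileSync } = require('child_process');
const urls = require('./urls.json');

for (const url of urls) {
  console.log('Generating sharing image for ' + url);
  // Run the screenshot script once per article URL
  execFileSync('node', ['shoot-sharing-image.js', url], { stdio: 'inherit' });
}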
263 Securing Your Site like It’s 1999 Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities. As the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned. What follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate. Bad input validation: Trusting anything the user sends you Our story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web. One such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong. One day, a player discovered a hidden field in the forum’s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum’s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player’s profile. Needless to say, this idyllic online community descended into chaos. Users changed each other’s passwords, deleted each other’s messages, and attacked each other under the cover of complete anonymity. What happened? There aren’t any official rules for developing software on the web. But if there were, my golden rule would be: Never trust user input. Ever. Always ask yourself how users will send you data that isn’t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe. Validate user input to make sure it’s of the correct type (e.g. string, number, JSON string) and that it’s the length that you were expecting. Don’t forget that user input doesn’t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML. Make sure to check a user’s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and whose data they are allowed access to. For example, a newly-registered user should not be allowed to change the user profile of a web forum’s owner. Finally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world. 
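To make those checks concrete, here is a minimal sketch in Node.js. It assumes a hypothetical Express-style app with body-parsing and session middleware already configured; the route, field name, length limit and saveProfile helper are all illustrative, not a drop-in solution.

// A hypothetical Express route handler sketching the checks above.
// Assumes express-session (req.session) and a body parser are configured.
app.post('/profile', (req, res) => {
  const { displayName } = req.body;

  // Validate type and length: never assume input is what it claims to be
  if (typeof displayName !== 'string' || displayName.length > 50) {
    return res.status(400).send('Invalid display name');
  }

  // Access control: identify the user from the authenticated session,
  // never from a hidden form field the user can tamper with
  if (!req.session.userId) {
    return res.status(403).send('Not allowed');
  }

  // Save the change against the session's user ID, and remember to escape
  // displayName before it is ever inserted into HTML
  saveProfile(req.session.userId, { displayName }); // hypothetical helper
  res.sendStatus(204);
});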
SQL injection: Allowing the user to run their own database queries A long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I’d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day. One day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn’t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned. It turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it? ' OR '1'='1 SQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this: SELECT COUNT(*) FROM USERS WHERE USERNAME='Alice' AND PASSWORD='hunter2' This query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let’s see what happens when we use our magic password instead! SELECT COUNT(*) FROM USERS WHERE USERNAME='Admin' AND PASSWORD='' OR '1'='1' Does the password look like part of the query to you? That’s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum! So how can you protect yourself from SQL injection? Never build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.js has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize. Expert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can! Cross site request forgery: Getting other users to do your dirty work for you Do you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge. Have you ever clicked on a hyperlink, only to find something that you weren’t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky… Let’s just say there are older and fouler things than Rick Astley in the dark places of the web. What if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it: <img src="http://www.netflix.com/JSON/AddToQueue?movieid=70110672" /> Did you notice the source URL of the image? It’s deliberately crafted to add a particular DVD to your queue. 
Sprinkle in a few more requests to change the user’s name and shipping address, and you could ship yourself DVDs completely free of charge! This attack is possible when websites unconditionally trust a user’s session cookies without checking where HTTP requests come from. The first check you can make is to verify that a request’s origin and referer headers match the location of the website. These headers can’t be programmatically set. Another check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren’t stored in cookies. You can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers. Cross site scripting: Someone else’s code running on your website In 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends. Samy enjoyed using MySpace which, at the time, was the world’s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn’t have to wade through photos of avocado toast back then… Samy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject <img /> and <div /> tags into his headline, but <script /> was filtered. MySpace wasn’t about to let someone else run their own code on MySpace. Intrigued, Samy set about finding out exactly what he could do with <img /> and <div /> tags. He found that you could add style properties to <div /> tags to style them with CSS. <div style="background:url('javascript:alert(1)')"> This code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from <div />. <div style="background:url('java script:alert(1)')"> Samy discovered that if he inserted a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace’s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code. alert(document.body.innerHTML) Samy wondered if he could inspect the page’s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too. alert(eval('document.body.inne' + 'rHTML')) This isn’t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onreadystatechange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends’ pages. One final obstacle stood in his way. The same origin policy is a security mechanism that prevents scripts hosted on one domain from interacting with sites hosted on another domain. 
if (location.hostname == 'profile.myspace.com') { document.location = 'http://www.myspace.com' + location.pathname + location.search } Samy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser’s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace’s API. Samy installed this code on his profile page, and he waited. Over the course of the next day, over a million people unwittingly installed Samy’s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy’s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to 90 days of community service. This is the power of installing a little bit of JavaScript on someone else’s website. It is called cross site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people. So how can you help protect yourself from cross-site scripting? Always sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don’t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down. You can also use an auto-escaping templating language to make sure nobody else’s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use. You should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function. For content not under your control, consider setting up subresource integrity protection. This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe by refusing to load the content if it changes without you knowing. npm audit: Protecting yourself from code you don’t own JavaScript and npm run the modern web. Together, they make it easy to take advantage of the world’s largest public registry of open source software. How do you protect yourself from code written by someone you’ve never met? Enter npm audit. npm audit reviews the security of your website’s dependency tree. You can start using it by upgrading to the latest version of npm: npm install npm -g npm audit When you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed. If your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What’s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you! 
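One way to make sure audits keep happening, rather than being a one-off, is to run npm audit as part of your existing scripts; it exits with a non-zero status when vulnerabilities are found, so most CI systems will fail the build. Here is a minimal sketch of that idea in package.json, where run-tests.js is a placeholder for whatever your project already runs:

{
  "scripts": {
    "test": "npm audit && node run-tests.js"
  }
}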
Securing your site like it’s 2019 The truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes. However, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies. As we roll over into 2019, let’s take the opportunity to reflect on the security of the code we write and be grateful for everything we’ve learned in the last twenty years. 2018 Katie Fenn katiefenn 2018-12-01T00:00:00+00:00 https://24ways.org/2018/securing-your-site-like-its-1999/ code
262 Be the Villain Inclusive Design is the practice of making products and services accessible to, and usable by, as many people as reasonably possible without the need for specialized accommodations. The practice was popularized by author and User Experience Design Director Kat Holmes. If getting you to discover her work is the only thing this article succeeds in doing, then I’ll consider it a success. As a framework for creating resilient solutions to problems, Inclusive Design is incredible. However, the aimless idealistic aspirations many of its newer practitioners default to can oftentimes run into trouble. Without outlining concrete, actionable outcomes that are then vetted by the people you intend to serve, there is the potential to do more harm than good. When designing, you take a user flow and make sure it can’t be broken: ensuring that if something is removed, it can be restored, or that something editable can also be updated at a later date—you know, that kind of thing. What we want to do is avoid surprises. Much like a water slide with a section of pipe missing, a broken flow forcibly ejects a user, to great surprise and frustration. Interactions within a user flow also have to be small enough to be self-contained, so as to avoid creating a none pizza with left beef scenario. Lately, I’ve been thinking about how to expand on this practice. Watertight user flows make for a great immediate experience, but it’s all too easy to miss the forest for the trees when you’re a product designer focused on cranking out features. What I’m proposing is that while trying to envision how a user flow could be broken, you also think about how it could be subverted. In addition to preventing the removal of a section of water slide, you also keep someone from mugging the user when they shoot out the end. If you pay attention, you’ll start to notice this subversion with increasing frequency: Domestic abusers using internet-controlled devices to spy on and control their partner. Zealots tanking a business’ rating on Google because its owners spoke out against unchecked gun violence. Forcing people to choose between TV or stalking because the messaging center portion of a cable provider’s entertainment package lacks muting or blocking features. White supremacists tricking celebrities into endorsing anti-Semitic conspiracy theories. Facebook repeatedly allowing housing, credit, and employment advertisers to discriminate against users by their race, ability, and religion. White supremacists also using a video game chat service as a recruiting tool. The unchecked harassment of minors on Instagram. Swatting. If I were to guess why we haven’t heard more about this problem, I’d say that optimistically, people have settled out of court. Pessimistically, it’s most likely because we ignore, dismiss, downplay, and suppress those who try to bring it to our attention. Subverted design isn’t the practice of employing Dark Patterns to achieve your business goals. If you are not familiar with the term, Dark Patterns are the use of cheap user interface tricks and psychological manipulation to get users to act against their own best interests. User Experience consultant Chris Nodder wrote Evil By Design, a fantastic book that unpacks how to detect and think about them, if you’re interested in this kind of thing. Subverted design also isn’t beholden design, or simple lack of attention. This phenomenon isn’t even necessarily premeditated. 
I think it arises from naïve (or willfully ignorant) design decisions being executed at a historically unprecedented pace and scale. These decisions are then preyed on by the shrewd and opportunistic, used to control and inflict harm on the undeserving. Have system, will game. This is worth discussing. As the field of design continues to industrialize empathy, it also continues to ignore the very established practice of threat modeling. Most times, framing user experience in terms of how to best funnel people into a service comes with an implicit agreement that the larger system that necessitates the service is worth supporting. To achieve success in the eyes of their superiors, designers may turn to emotional empathy exercises. By projecting themselves into the perceived surface-level experiences of others, they play-act at understanding how to nudge their targeted demographics into a conversion funnel. This roleplaying exercise has the effect of scoping concerns to the immediate, while simultaneously reinforcing the idea of engagement at all cost within the identified demographic. The thing is, pure engagement leaves the door wide open for bad actors. Even within the scope of a limited population, the assumption that everyone entering into the funnel is acting with good intentions is a poor one. Security researchers, network administrators, and other professionals who practice threat modeling understand that the opposite is true. By preventing everyone save for well-intentioned users from operating a system within the parameters you set for them, you intentionally limit the scope of abuse that can be enacted. Don’t get me wrong: being able to escort as many users as you can to the happy path is a foundational skill. But we should also be having uncomfortable conversations about why something unthinkable may in fact not be. They’re not going to be fun conversations. It’s not going to be easy convincing others that these aren’t paranoid delusions best tucked out of sight in the darkest, dustiest corner of the backlog. Realistically, talking about it may even harm your career. But consider the alternative. The controlled environment of the hypothetical allows us to explore these issues without propagating harm. Better to be viewed as the office’s resident villain than to have to live with the fallout of never having asked. If the past few years have taught us anything, it’s that the choices we make—or avoid making—have consequences. Design has been doing a lot of growing up as of late, including waking up to the idea that technology isn’t neutral. You’re going to have to start thinking the way a monster does—if you can imagine it, chances are someone else can as well. To get into this kind of mindset, inverting the Inclusive Design Principles is a good place to start: Providing a comparable experience becomes forcing a single path. Considering situation becomes ignoring circumstance. Being consistent becomes acting capriciously. Giving control becomes removing autonomy. Offering choice becomes limiting options. Prioritizing content becomes obfuscating purpose. Adding value becomes filling with gibberish. Combined, these inverted principles start to paint a picture of something we’re all familiar with: a half-baked, unscrupulous service that will jump at the chance to take advantage of you. This environment is also a perfect breeding ground for spawning bad actors. These kinds of services limit you in the ways you can interact with them. 
They kick you out or lock you in if you don’t meet their unnamed criteria. They force you to parse layout, prices, and policies that change without notification or justification. Their controls operate in ways that are unexpected and may shift throughout the experience. Their terms are dictated to you, gaslighting you to extract profit. Heaps of jargon and flashy, unnecessary features are showered on you to distract from larger structural and conceptual flaws. So, how else can we go about preventing subverted design? Marli Mesibov, Content Strategist and Managing Editor of UX Booth, wrote a brilliant article about how to use Dark Patterns for good—perhaps the most important takeaway being admitting you have a problem in the first place. Another exercise is asking the question, “What is the evil version of this feature?” Ask it during the ideation phase. Ask it as part of acceptance criteria. Heck, ask it over lunch. I honestly don’t care when, so long as the question is actually raised. In keeping with the spirit of this article, we can also expand on this line of thinking. Author, scientist, feminist, and pacifist Ursula Franklin urges us to ask, “Whose benefits? Whose risks?” instead of “What benefits? What risks?” in her talk, When the Seven Deadly Sins Became the Seven Cardinal Virtues. Inspired by the talk, Ethan Marcotte discusses how this relates to the web platform in his powerful post, Seven into seven. Few things in this world are intrinsically altruistic or good—it’s just the nature of the beast. However, that doesn’t mean we have to stand idly by when harm is created. If we can add terms like “anti-pattern” to our professional vocabulary, we can certainly also incorporate phrases like “abuser flow.” Design finally got a seat at the table. We should use this newfound privilege wisely. Listen to women. Listen to minorities, listen to immigrants, the unhoused, the less economically advantaged, and the less technologically-literate. Listen to the underrepresented and the underprivileged. Subverted design is a huge problem, likely one that will never completely go away. However, the more of us who put the hard work into being the villain, the more we can lessen the scope of its impact. 2018 Eric Bailey ericbailey 2018-12-06T00:00:00+00:00 https://24ways.org/2018/be-the-villain/ ux
261 Surviving—and Thriving—as a Remote Worker Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there’s an entire world of talent out there? I’ve been working remotely, full-time, for five and a half years. I’ve reached the point where I can’t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I’ve grown attached to my current level of freedom and flexibility. However, it took me a lot of trial and error to reach success as a remote worker — and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn’t for everyone, but most people, with enough effort, can make it work — or even thrive. Here’s what I’ve learned in over five years of working remotely. Experiment with your environment As a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space — whether that’s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch… but I’ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn’t completely destroy your eyesight. Working remotely doesn’t always mean working from home. If you don’t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful if you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don’t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you’re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls. Co-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They’re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am an impulsive and impatient person. When I want a book now, I mean now.) Just be polite — make sure your headphones don’t leak, and don’t work from a library if you have a day full of calls. Remember, too, that you don’t have to stay in the same spot all day. It’s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don’t force yourself to sit at your desk for eight hours if that doesn’t work for you. Set boundaries If you’re a workaholic, working remotely can be a challenge. It’s incredibly easy to just… work. All the time. My work computer is almost always with me. 
If I remember at 11pm that I wanted to do something, there’s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it’s time to figure this out, eh? Learning how to set boundaries is one of the most important lessons I’ve learned working remotely. (And honestly, it’s something I still struggle with). For a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I need to react to, and then rolling over to my couch with my computer. Suddenly, it’s noon, I’m unwashed, unfed, starting to get a headache, and wondering why I suddenly hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot. I recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time. I too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can’t get sucked into work before I’m even dressed. I’ve gotten pretty good at forbidding myself from working until I’m ready, but building any new habit requires intentionality. Something else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn’t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I’ve heard other coworkers have figured out ways to work through these technical issues, so I’m hoping to give it another try soon. You might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It’s true! If you’re a more disciplined person, you might not need any of these coping mechanisms. If you’re struggling, finding ways to subvert your own bad habits can be the difference between thriving and burning out. Create intentional transition time I know it’s a stereotype that people who work from home stay in their pajamas all day, but… sometimes, it’s very easy to do. I’ve found that in order to reach peak focus, I need to create intentional transition time. The most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it’s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort. I’ve found it helpful to take similar steps at the end of my day. If I’ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. 
If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch. This is another reason working from outside your home is advantageous. Commutes, even if it’s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen — not next to your computer. Embrace async If you’re used to working in an office, you’ve probably gotten pretty used to being able to pop over to a colleague’s desk if you need to ask a question. They’re pretty much forced to engage with you at that point. When you’re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary’s in Australia and it’s 3am there right now. For many remote workers, that’s part of the package. When you’re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers. Asynchronous communication is great. Not everyone can be present for every meeting or office conversation — and the same goes for working remotely. However, when you’re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (“p2s”) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase — “p2 or it didn’t happen.” Working remotely has made me a better communicator largely because I’ve gotten into the habit of making written updates. I’ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it! (That said, if you’re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.) Seek out social opportunities Even introverts can feel lonely or isolated. When you work remotely, there isn’t a built-in community you’re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide. I have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.) Every now and then, I’ll also hop on a video call with some work friends and just hang out for a little while. It feels great to actually see someone laugh. If you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work. If you don’t have access to a co-working space, your town or city likely has meetups. 
Create a Meetup.com account and search for something that piques your interest. If you’ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can’t fall back on your work to provide community, you need to build your own. These are tips that I’ve found help me, but not everyone works the same way. Remember that it’s okay to experiment — just because you’ve worked one way, doesn’t mean that’s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment! Hope to see you all online soon 🙌 2018 Mel Choyce melchoyce 2018-12-09T00:00:00+00:00 https://24ways.org/2018/thriving-as-a-remote-worker/ process
260 The Art of Mathematics: A Mandala Maker Tutorial In front-end development, there’s often a great deal of focus on tools that aim to make our work more efficient. But what if you’re new to web development? When you’re just starting out, the amount of new material can be overwhelming, particularly if you don’t have a solid background in Computer Science. But the truth is, once you’ve learned a little bit of JavaScript, you can already make some pretty impressive things. A couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days: Mandala Maker user interface The coolest part about it is the fact that it’s a tool: anyone can use it to create something original and brand new. In this tutorial, we’ll build a smaller version of this app – a symmetrical drawing tool in ES5 JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we’re done, you’re on your own and can tweak it as you please. Be creative! Preparations: a blank canvas The first thing you’ll need for this project is a designated drawing space. We’ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like). Files Create 3 files: index.html, style.css, main.js. Don’t forget to include your JS and CSS files in your HTML. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <link rel="stylesheet" type="text/css" href="style.css"> <script src="main.js"></script> </head> <body onload="init()"> <canvas width="600" height="600"> <p>Your browser doesn't support canvas.</p> </canvas> </body> </html> I’ll ask you to update your HTML file at a later point, but the CSS file we’ll start with will stay the same throughout the project. This is the full CSS we are going to use: body { background-color: #ccc; text-align: center; } canvas { touch-action: none; background-color: #fff; } button { font-size: 110%; } Next steps We are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts: Building a simple drawing app with one line and one color Adding a Clear button and a color picker Adding more functionality: 2 line drawing (add the first reflection) Adding more functionality: 8 line drawing (add 6 more reflections!) Interactive demos This tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet. Part 1: A simple drawing app Let’s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables: var canvas, context, w, h, prevX = 0, currX = 0, prevY = 0, currY = 0, draw = false; The variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. 
A short line is drawn between (prevX, prevY) and (currX, currY) repeatedly many times while we move the pointer upon the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true. 1. init Responsible for canvas set up, this listens to pointer events and the location of their coordinates and sets everything in motion by calling other functions, which in turn handle touch and movement events. function init() { canvas = document.querySelector("canvas"); context = canvas.getContext("2d"); w = canvas.width; h = canvas.height; canvas.onpointermove = handlePointerMove; canvas.onpointerdown = handlePointerDown; canvas.onpointerup = stopDrawing; canvas.onpointerout = stopDrawing; } 2. drawLine This is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial. lineWidth and lineCap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY). function drawLine() { var a = prevX, b = prevY, c = currX, d = currY; context.lineWidth = 4; context.lineCap = "round"; context.beginPath(); context.moveTo(a, b); context.lineTo(c, d); context.stroke(); context.closePath(); } 3. stopDrawing This is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout). function stopDrawing() { draw = false; } 4. recordPointerLocation This tracks the pointer’s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it. function recordPointerLocation(e) { prevX = currX; prevY = currY; currX = e.clientX - canvas.offsetLeft; currY = e.clientY - canvas.offsetTop; } 5. handlePointerMove This is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it. function handlePointerMove(e) { if (draw) { recordPointerLocation(e); drawLine(); } } 6. handlePointerDown This is set by init to run when the pointer is down (finger is on touchscreen or mouse is clicked). If it is, it calls recordPointerLocation to get the path and sets draw to true. That’s because we only want movement events from handlePointerMove to cause drawing if the pointer is down. function handlePointerDown(e) { recordPointerLocation(e); draw = true; } Finally, we have a working drawing app. But that’s just the beginning! See the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen. Part 2: Add a Clear button and a color picker Now we’ll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear. 
<body onload="init()"> <canvas width="600" height="600"> <p>Your browser doesn't support canvas.</p> </canvas> <div class="menu"> <input type="color" class="color" /> <button type="button" class="clear">Clear</button> </div> </body> Color picker This is our new color picker function. It targets the input element by its class and gets its value. function getColor() { return document.querySelector(".color").value; } Up until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We’ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor. function drawLine() { //...code... context.strokeStyle = getColor(); context.lineWidth = 4; context.lineCap = "round"; //...code... } Clear button This is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing. function clearCanvas() { if (confirm("Want to clear?")) { context.clearRect(0, 0, w, h); } } The method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner. If we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would be cleared. Let’s add this event handler to init: function init() { //...code... canvas.onpointermove = handlePointerMove; canvas.onpointerdown = handlePointerDown; canvas.onpointerup = stopDrawing; canvas.onpointerout = stopDrawing; document.querySelector(".clear").onclick = clearCanvas; } See the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen. Part 3: Draw with 2 lines It’s time to make a line appear where no pointer has gone before. A ghost line! For that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide whether it’s going to be across the y-axis or the x-axis. Since this is an arbitrary decision, it doesn’t matter which one we choose. Let’s go with the x-axis. Here is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!). Now, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal. A sketch illustrating the mathematics of reflecting a point. What happens to the x coordinates? The variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them “the x coordinates”. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c. What happens to the y coordinates? What about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? 
The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height. This is the new code for the app’s variables and the two lines: the one that fills the pointer’s path and the one mirroring it across the x-axis. function drawLine() { var a = prevX, a_ = a, b = prevY, b_ = h-b, c = currX, c_ = c, d = currY, d_ = h-d; //... code ... // Draw line #1, at the pointer's location context.moveTo(a, b); context.lineTo(c, d); // Draw line #2, mirroring the line #1 context.moveTo(a_, b_); context.lineTo(c_, d_); //... code ... } In case this was too abstract for you, let’s look at some actual numbers to see how this works. Let’s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3. So b' = 10 - 2 = 8 and d' = 10 - 3 = 7. We use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That’s it, really. This is how it works for a single point, and a line (not necessarily a straight one, by the way) is simply made up of many, many small segments that behave just like points. If you are still confused, I don’t blame you. Here is the result. Draw something and see what happens. See the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen. Part 4: Draw with 8 lines I have made yet another confusing sketch, with points C and D, so you understand what we’re trying to do. Later on we’ll look at points E, F, G and H as well. The circled point is the one we’re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas. A sketch illustrating points C and D. This is the part where the math gets a bit mathier, as our drawLine function evolves further. We’ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let’s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different). function drawLine() { //... code ... // Reassign values a_ = w-a; b_ = b; c_ = w-c; d_ = d; // Draw the 3rd line context.moveTo(a_, b_); context.lineTo(c_, d_); // Reassign values a_ = w-a; b_ = h-b; c_ = w-c; d_ = h-d; // Draw the 4th line context.moveTo(a_, b_); context.lineTo(c_, d_); //... code ... What is happening? You might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That’s because we want the symmetry to hold for a rectangular canvas as well, and this way it will. Also, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It’s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous). What happens to the x coordinates? As you recall, our x coordinates are a (prevX) and c (currX). 
For the third line we are adding, a' = w - a and c' = w - c, which means… For the fourth line, the same thing happens to our x coordinates a and c. What happens to the y coordinates? As you recall, our y coordinates are b (prevY) and d (currY). For the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this a reflection across the y-axis. For the fourth line, b' = h - b and d' = h - d, which we’ve seen before: that’s a reflection across the x-axis. We have four more lines, or locations, to define. Note: the part of the code that’s responsible for drawing a micro-line between the newly calculated coordinates is always the same: context.moveTo(a_, b_); context.lineTo(c_, d_); We can leave it out of the next code snippets and just focus on the calculations, i.e., the reassignments. Once again, we need some concrete examples to see where we’re going, so here’s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code. A sketch illustrating points E and F. This is the code for E and F: // Reassign for 5 a_ = w/2+h/2-b; b_ = w/2+h/2-a; c_ = w/2+h/2-d; d_ = w/2+h/2-c; // Reassign for 6 a_ = w/2+h/2-b; b_ = h/2-w/2+a; c_ = w/2+h/2-d; d_ = h/2-w/2+c; Their x coordinates are identical and their y coordinates are reversed to one another. This one will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3). A sketch illustrating points G and H. This is the code: // Reassign for 7 a_ = w/2-h/2+b; b_ = w/2+h/2-a; c_ = w/2-h/2+d; d_ = w/2+h/2-c; // Reassign for 8 a_ = w/2-h/2+b; b_ = h/2-w/2+a; c_ = w/2-h/2+d; d_ = h/2-w/2+c; //...code... } Once again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won’t go into the full details, since this has been a long enough journey as it is, and I think we’ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them. I hope you had fun learning! This is our final app: See the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen. 2018 Hagar Shilo hagarshilo 2018-12-02T00:00:00+00:00 https://24ways.org/2018/the-art-of-mathematics/ code
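If it helps to see all eight micro-lines in one place, here is a consolidated sketch of the final drawLine, assembled from the reassignments in the tutorial above. It assumes the same variables (context, w, h, prevX, prevY, currX, currY) and the getColor function from Part 2; treat it as a reference rather than a replacement for working through the steps.

function drawLine() {
  var a = prevX, b = prevY, c = currX, d = currY;
  context.strokeStyle = getColor();
  context.lineWidth = 4;
  context.lineCap = "round";
  context.beginPath();
  // Lines 1 and 2: the pointer's path and its reflection across the x-axis
  context.moveTo(a, b); context.lineTo(c, d);
  context.moveTo(a, h - b); context.lineTo(c, h - d);
  // Lines 3 and 4: reflections across the y-axis, then across both axes
  context.moveTo(w - a, b); context.lineTo(w - c, d);
  context.moveTo(w - a, h - b); context.lineTo(w - c, h - d);
  // Lines 5 and 6: the reassignments for points E and F
  context.moveTo(w/2 + h/2 - b, w/2 + h/2 - a); context.lineTo(w/2 + h/2 - d, w/2 + h/2 - c);
  context.moveTo(w/2 + h/2 - b, h/2 - w/2 + a); context.lineTo(w/2 + h/2 - d, h/2 - w/2 + c);
  // Lines 7 and 8: the reassignments for points G and H
  context.moveTo(w/2 - h/2 + b, w/2 + h/2 - a); context.lineTo(w/2 - h/2 + d, w/2 + h/2 - c);
  context.moveTo(w/2 - h/2 + b, h/2 - w/2 + a); context.lineTo(w/2 - h/2 + d, h/2 - w/2 + c);
  context.stroke();
  context.closePath();
}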
259 Designing Your Future I’ve had the pleasure of working for a variety of clients – both large and small – over the last 25 years. In addition to my work as a design consultant, I’ve worked as an educator, leading the Interaction Design team at Belfast School of Art, for the last 15 years. In July, 2018 – frustrated with formal education, not least the ever-present hand of ‘austerity’ that has ravaged universities in the UK for almost a decade – I formally reduced my teaching commitment, moving from a full-time role to a half-time role. Making the move from a (healthy!) monthly salary towards a position as a freelance consultant is not without its challenges: one month your salary’s arriving in your bank account (and promptly disappearing to pay all of your bills); the next month, that salary’s been drastically reduced. That can be a shock to the system. In this article, I’ll explore the challenges encountered when taking a life-changing leap of faith. To help you confront ‘the fear’ – the nervousness, the sleepless nights and the ever-present worry about paying the bills – I’ll provide a set of tools that will enable you to take a leap of faith and pursue what deep down drives you. In short: I’ll bare my soul and share everything I’m currently working on to – once and for all – make a final bid for freedom. This isn’t easy. I’m sharing my innermost hopes and aspirations, and I might open myself up to ridicule, but I believe that by doing so, I might help others, by providing them with tools to help them make their own leap of faith. The power of visualisation As designers we have skills that we use day in, day out to imagine future possibilities, which we then give form. In our day-to-day work, we use those abilities to design products and services, but I also believe we can use those skills to design something every bit as important: ourselves. In this article I’ll explore three tools that you can use to design your future: Product DNA Artefacts From the Future Tomorrow Clients Each of these tools is designed to help you visualise your future. By giving that future form, and providing a concrete goal to aim for, you put the pieces in place to make that future a reality. Brian Eno – the noted musician, producer and thinker – states, “Humans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds.” Eno helpfully provides a powerful example: When Martin Luther King said, “I have a dream,” he was inviting others to dream that dream with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it. The dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real. When you imagine your future – designing an alternate, imagined reality in your mind – you begin the process of making that future real. Product DNA The first tool, which I use regularly – for myself and for client work – is a tool called Product DNA. The intention of this tool is to identify beacons from which you can learn, helping you to visualise your future. We all have heroes – individuals or organisations – that we look up to. Ask yourself, “Who are your heroes?” If you had to pick three, who would they be and what could you learn from them? (You probably have more than three, but distilling down to three is an exercise in itself.) 
Earlier this year, when I was putting the pieces in place for a change in career direction, I started with my heroes. I chose three individuals that inspired me: Alan Moore: the author of ‘Do Design: Why Beauty is Key to Everything’; Mark Shayler: the founder of Ape, a strategic consultancy; and Seth Godin: a writer and educator I’ve admired and followed for many years. Looking at each of these individuals, I ‘borrowed’ a little DNA from each of them. That DNA helped me to paint a picture of the kind of work I wanted to do and the direction I wanted to travel. Moore’s book - ‘Do Design’ – had a powerful influence on me, but the primary inspiration I drew from him was the sense of gravitas he conveyed in his work. Moore’s mission is an important one and he conveys that with an appropriate weight of expression. Shayler’s work appealed to me for its focus on equipping big businesses with a startup mindset. As he puts it: “I believe that you can do the things that you do better.” That sense – of helping others to be their best selves – appealed to me. Finally, the words Godin uses to describe himself – “An Author, Entrepreneur and Most of All, a Teacher” – resonated with me. The way he positions himself, as, “most of all, a teacher,” gave me the belief I needed that I could work as an educator, but beyond the ivory tower of academia. I’ve been exploring each of these individuals in depth, learning from them and applying what I learn to my practice. They don’t all know it, but they are all ‘mentors from afar’. In a moment of serendipity – and largely, I believe, because I’d used this tool to explore his work – I was recently invited by Alan Moore to help him develop a leadership programme built around his book. The key lesson here is that not only has this exercise helped me to design my future and give it tangible form, it’s also led to a fantastic opportunity to work with Alan Moore, a thinker who I respect greatly. Artefacts From the Future The second tool, which I also use regularly, is a tool called ‘Artefacts From the Future’. These artefacts – especially when designed as ‘finished’ pieces – are useful for creating provocations to help you see the future more clearly. ‘Artefacts From the Future’ can take many forms: they might be imagined magazine articles, news items, or other manifestations of success. By imagining these end points and giving them form, you clarify your goals, establishing something concrete to aim for. Earlier this year I revisited this tool to create a provocation for myself. I’d just finished Alla Kholmatova’s excellent book on ‘Design Systems’, which I would recommend highly. The book wasn’t just filled with valuable insights, it was also beautifully designed. Once I’d finished reading Kholmatova’s book, I started thinking: “Perhaps it’s time for me to write a new book?” Using the magic of ‘Inspect Element’, I created a fictitious page for a new book I wanted to write: ‘Designing Delightful Experiences’. I wrote a description for the book, considering how I’d pitch it. This imagined page was just what I needed to paint a picture in my mind of a possible new book. I contacted the team at Smashing Magazine and pitched the idea to them. I’m happy to say that I’m now working on that book, which is due to be published in 2019. Without this fictional promotional page from the future, the book would have remained as an idea – loosely defined – rolling around my mind. 
By spending some time, turning that idea into something ‘real’, I had everything I needed to tell the story of the book, sharing it with the publishing team at Smashing Magazine. Of course, they could have politely informed me that they weren’t interested, but I’d have lost nothing – truly – in the process. As designers, creating these imaginary ‘Artefacts From the Future’ is firmly within our grasp. All we need to do is let go a little and allow our imaginations to wander. In my experience, working with clients and – to a lesser extent, students – it’s the ‘letting go’ part that’s the hard part. It can be difficult to let down your guard and share a weighty goal, but I’d encourage you to do so. At the end of the day, you have nothing to lose. The key lesson here is that your ‘Artefacts From the Future’ will focus your mind. They’ll transform your unformed ideas into ‘tangible evidence’ of future possibilities, which you can use as discussion points and provocations, helping you to shape your future reality. Tomorrow Clients The third tool, which I developed more recently, is a tool called ‘Tomorrow Clients’. This tool is designed to help you identify a list of clients that you aspire to work with. The goal is to pinpoint who you would like to work with – in an ideal world – and define how you’d position yourself to win them over. Again, this involves ‘letting go’ and allowing your mind to imagine the possibilities, asking, “What if…?” Before I embarked upon the design of my new website, I put together a ‘soul searching’ document that acted as a focal point for my thinking. I contacted a number of designers for a second opinion to see if my thinking was sound. One of my graduates – Chris Armstrong, the founder of Niice – replied with the following: “Might it be useful to consider five to ten companies you’d love to work for, and consider how you’d pitch yourself to them?” This was just the provocation I needed. To add a little focus, I reduced the list to three, asking: “Who would my top three clients be?” By distilling the list down I focused on who I’d like to work for and how I’d position myself to entice them to work with me. My list included: IDEO, Adobe and IBM. All are companies I admire and I believed each would be interesting to work for. This exercise might – on the surface – appear a little like indulging in fantasy, but I believe it helps you to clarify exactly what it is you are good at and, just as importantly, put that in to words. For each company, I wrote a short pitch outlining why I admired them and what I thought I could add to their already existing skillset. Focusing first on Adobe, I suggested establishing an emphasis on educational resources, designed to help those using Adobe’s creative tools to get the most out of them. A few weeks ago, I signed a contract with the team working on Adobe XD to create a series of ‘capsule courses’, focused on UX design. The first of these courses – exploring UI design – will be out in 2019. I believe that Armstrong’s provocation – asking me to shift my focus from clients I have worked for in the past to clients I aspire to work for in the future – made all the difference. The key lesson here is that this exercise encouraged me to raise the bar and look to the future, not the past. In short, it enabled me to proactively design my future. In closing… I hope these three tools will prove a welcome addition to your toolset. I use them when working with clients, I also use them when working with myself. 
I passionately believe that you can design your future. I also firmly believe that you’re more likely to make that future a reality if you put some thought into defining what it looks like. As I say to my students and the clients I work with: It’s not enough to want to be a success, the word ‘success’ is too vague to be a destination. A far better approach is to define exactly what success looks like. The secret is to visualise your future in as much detail as possible. With that future vision in hand as a map, you give yourself something tangible to translate into a reality. 2018 Christopher Murphy christophermurphy 2018-12-15T00:00:00+00:00 https://24ways.org/2018/designing-your-future/ process
258 Mistletoe Offline It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season. I am, of course, talking about Murphy, and the golden rule he gave unto us: Anything that can go wrong will go wrong. So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit? But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong. I guess there’s nothing we can do about those particular situations, right? Wrong! A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade. If you’ve got a custom 404 page, why not make a custom offline page too? Get your server in order Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields. If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must. Make an offline page Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference. When you’re done, publish the offline page at a suitably imaginative URL, like, say /offline.html. Pre-cache your offline page Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener: addEventListener('install', installEvent => { // put your instructions here. }); // end addEventListener In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want.
Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code: addEventListener('install', installEvent => { installEvent.waitUntil( caches.open('Johnny') .then( JohnnyCache => { JohnnyCache.addAll([ '/offline.html' ]); // end addAll }) // end open.then ); // end waitUntil }); // end addEventListener I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point: addEventListener('install', installEvent => { installEvent.waitUntil( caches.open('Johnny') .then( JohnnyCache => { JohnnyCache.addAll([ '/offline.html', '/path/to/stylesheet.css', '/path/to/javascript.js', '/path/to/image.jpg' ]); // end addAll }) // end open.then ); // end waitUntil }); // end addEventListener Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached. Intercept requests The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time: addEventListener('fetch', fetchEvent => { // What happens next is up to you! }); // end addEventListener Let’s write a fairly conservative script with the following logic: Whenever a file is requested, First, try to fetch it from the network, But if that doesn’t work, try to find it in the cache, But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead. Here’s how that translates into JavaScript: // Whenever a file is requested addEventListener('fetch', fetchEvent => { const request = fetchEvent.request; fetchEvent.respondWith( // First, try to fetch it from the network fetch(request) .then( responseFromFetch => { return responseFromFetch; }) // end fetch.then // But if that doesn't work .catch( fetchError => { // try to find it in the cache return caches.match(request) .then( responseFromCache => { if (responseFromCache) { return responseFromCache; // But if that doesn't work } else { // and it's a request for a web page if (request.headers.get('Accept').includes('text/html')) { // show the custom offline page instead return caches.match('/offline.html'); } // end if } // end if/else }) // end match.then }) // end fetch.catch ); // end respondWith }); // end addEventListener I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas. Hook up your service worker script You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling in to every page on your site, or add this in a script element at the end of every page’s HTML: if (navigator.serviceWorker) { navigator.serviceWorker.register('/serviceworker.js'); } That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is.
When it comes to JavaScript, feature detection is your friend. You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted. Go further! Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe! This is just the beginning. You can do more with service workers. What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version (there’s a rough sketch of this idea at the end of this article). Or, what if instead of reaching out to the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images. So many options! The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript. Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action. 2018 Jeremy Keith jeremykeith 2018-12-04T00:00:00+00:00 https://24ways.org/2018/mistletoe-offline/ code
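Here is that rough sketch of the “stash a copy as you fetch” idea: try the network first, store a copy of each successful response, and fall back to the cache (and then the offline page) when the network fails. This isn’t code from the article, just one possible way to write those steps, reusing the ‘Johnny’ cache from earlier:

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  fetchEvent.respondWith(
    // First, try the network
    fetch(request)
    .then( responseFromFetch => {
      // Stash a copy for later (a response can only be read once, hence clone)
      const copy = responseFromFetch.clone();
      caches.open('Johnny')
      .then( JohnnyCache => {
        JohnnyCache.put(request, copy);
      }); // end open.then
      return responseFromFetch;
    }) // end fetch.then
    // If the network fails, try the cache, then the offline page
    .catch( fetchError => {
      return caches.match(request)
      .then( responseFromCache => {
        return responseFromCache || caches.match('/offline.html');
      }); // end match.then
    }) // end fetch.catch
  ); // end respondWith
}); // end addEventListener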
257 The (Switch)-Case for State Machines in User Interfaces You’re tasked with creating a login form. Email, password, submit button, done. “This will be easy,” you think to yourself. Login form by Selecto You’ve made similar forms many times in the past; it’s essentially muscle memory at this point. You’re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you’ll have to translate the pixels to meaningful, responsive CSS values, but that’s the least of your problems. As you’re writing up the HTML structure and CSS layout and styles for this form, you realize that you don’t know what the successful “logged in” page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work. What if login fails? Where do those errors show up? Should we show errors differently if the user forgot to enter their email, or password, or both? Or should the submit button be disabled? Should we validate the email field? When should we show validation errors – as they’re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.) When should the errors disappear? What do we show during the login process? Some loading spinner? What if loading takes too long, or a server error occurs? Many more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases. Modeling Behavior Describing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story – they often leave out potential edge-cases or small yet important bits of information. However, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine. The general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states: start - not submitted yet loading - submitted and logging in success - successfully logged in error - login failed Additionally, we can describe an application as accepting a finite number of events – that is, all the possible events that can be “sent” to the application, either from the user or some other external entity: SUBMIT - pressing the submit button RESOLVE - the server responds, indicating that login is successful REJECT - the server responds, indicating that login failed Then, we can combine these states and events to describe the transitions between them. That is, when the application is in one state, and an event occurs, we can specify what the next state should be: From the start state, when the SUBMIT event occurs, the app should be in the loading state. From the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state. If login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state. From the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.
Otherwise, if any other event occurs, don’t do anything and stay in the same state. That’s a pretty thorough description, similar to a user story! It’s also a bit more symbolic than a user story (e.g., “when the SUBMIT event occurs” instead of “when the user presses the submit button”), and that’s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like: Every state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event. From Visuals to Code Drawing a state machine doesn’t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn’t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application. With the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements: function loginMachine(state, event) { switch (state) { case 'start': if (event === 'SUBMIT') { return 'loading'; } break; case 'loading': if (event === 'RESOLVE') { return 'success'; } else if (event === 'REJECT') { return 'error'; } break; case 'success': // Accept no further events break; case 'error': if (event === 'SUBMIT') { return 'loading'; } break; default: // This should never occur return undefined; } // Otherwise, stay in the same state return state; } console.log(loginMachine('start', 'SUBMIT')); // => 'loading' This is fine (I suppose) but personally, I find it much easier to use objects: const loginMachine = { initial: "start", states: { start: { on: { SUBMIT: 'loading' } }, loading: { on: { REJECT: 'error', RESOLVE: 'success' } }, error: { on: { SUBMIT: 'loading' } }, success: {} } }; function transition(state, event) { const stateConfig = loginMachine.states[state]; // Look up the state return (stateConfig.on && stateConfig.on[event]) // Look up the next state based on the event || state; // If not found, return the current state } console.log(transition('start', 'SUBMIT')); // => 'loading' As you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here: A Common Language Between Designers and Developers Although finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can: identify all possible states, and potentially missing states describe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?) eliminate impossible states and identify states that are “unreachable” (have no entry transition) or “sunken” (have no exit transition) add features with full confidence of knowing what other states they might affect simplify redundant states or complex user flows create test paths for almost every possible user flow, and easily identify edge cases collaborate better by understanding the entire application model equally.
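To see the machine above in action, you can feed the transition function a sequence of events and watch the state evolve. This is an illustrative sketch (the event sequence is made up, but the states and transitions are the ones defined in loginMachine):

let state = loginMachine.initial; // 'start'
for (const event of ['SUBMIT', 'REJECT', 'SUBMIT', 'RESOLVE']) {
  state = transition(state, event);
  console.log(`${event} -> ${state}`);
}
// SUBMIT -> loading
// REJECT -> error
// SUBMIT -> loading
// RESOLVE -> success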
Not a New Idea I’m not the first to suggest that state machines can help bridge the gap between design and development. Vince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines User flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them. In 1999, Ian Horrocks wrote a book titled “Constructing the User Interface with Statecharts”, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today. More than a decade earlier, David Harel published “Statecharts: A Visual Formalism for Complex Systems”, in which the statechart - an extended hierarchical state machine model - is born. State machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits: Visualized modeling Precise diagrams Automatic code generation Comprehensive test coverage Accommodation of late-breaking requirements changes Moving Forward It’s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources: The World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications The Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling I gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases. I have also been working on XState, which is a library for “state machines and statecharts for the modern web”. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages). I’m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience. 2018 David Khourshid davidkhourshid 2018-12-12T00:00:00+00:00 https://24ways.org/2018/state-machines-in-user-interfaces/ code
256 Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist We’re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we’re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month. *(a Naturalist is someone who likes learning about nature, not someone who’s a fan of being naked, that’s a ‘Naturist’… different thing!) Looking for critters in rocky intertidal habitats One of my favourite things to do is going rockpooling, or as we call it over here in California, ‘tidepooling’. Amounting to the same thing, it’s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this ‘rocky intertidal habitat’. A particularly exciting creature that lives here is the Nudibranch, a type of super colourful ‘sea slug’. There are over 3000 species of Nudibranch worldwide. (The word “nudibranch” comes from the Latin nudus, naked, and the Greek βρανχια / brankhia, gills.) They are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons. My favourite place to go tidepooling here is Pillar Point in Half Moon Bay (at other times of the year more famously known for the surf competition ‘Mavericks’). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones as well as being relatively accessible. I was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn’t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist. Using technology and the crowdsourced species observations of the iNaturalist community we can shortcut our way to this superpower! Finding nearby animals with iNaturalist We’re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society. I’ve been using iNaturalist for over two years to record and identify plants and animals that I’ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I’ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting. This process is great because once an observation has been identified by at least two people it becomes ‘verified’ and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.
iNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the ‘Try it out’ button. You’ll then get a URL that looks a bit like https://api.inaturalist.org/v1/observations?captive=false &geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584 &radius=5&order=desc&order_by=created_at which you can call and interrogate using a programming language of your choice. If you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season). Rapid development using Observable Notebooks We’re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn’t require any setup at all. You can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example. Each ‘notebook’ consists of a page with a column of ‘cells’, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction one from Observable (and D3) creator Mike Bostock. Developing your Naturalist superpowers If you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you. For our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in. We could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. We can use this tool to draw the area we want to search within: boundingbox.klokantech.com The tool lets you export the bounding box in several forms using the dropdown at the bottom left under the map. We are going to use the ‘DublinCore’ format as it’s closest to the format needed by the iNaturalist API.
westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811 A quick map primer: The higher the latitude the more north it is The lower the latitude the more south it is Latitude 0 = the equator The higher the longitude the more east it is of Greenwich The lower the longitude the more west it is of Greenwich Longitude 0 = Greenwich In the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California: nelat = highest latitude = north limit = 37.499811 nelng = highest longitude = east limit = -122.492738 swlat = smallest latitude = south limit = 37.492805 swlng = smallest longitude = west limit = -122.50542 As API parameters these look like this: ?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542 These parameters in this format can be used for most of the iNaturalist API methods. Nudibranch seasonality in Pillar Point We can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box. In addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist’s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent ‘Order Nudibranchia’. Another useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words eg ‘Glaucus Atlanticus’ the first Latin word is the ‘Genus’ like a family name ‘Glaucus’, and the second word identifies that particular species, like a given name ‘Atlanticus’. The two Latin words together indicate a specific species; the term we use colloquially to refer to a type of animal often differs wildly region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus Atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead. The following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box: pillar_point_counts_per_week = fetch( "https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true" ).then(response => { return response.json(); }) Our next step is to take this data and draw a graph! We’ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks. (Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite) The iNaturalist API returns data that looks like this: { "total_results": 53, "page": 1, "per_page": 53, "results": { "week_of_year": { "1": 136, "2": 20, "3": 150, "4": 65, "5": 186, "6": 74, "7": 47, "8": 87, "9": 64, "10": 56, But for our Vega-Lite graph we need data that looks like this: [{ "week": "01", "value": 136 }, { "week": "02", "value": 20 }, ...]
We can convert what we get back from the API to the second format using a loop that iterates over the object keys: objects_to_plot = { let objects = []; Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) { objects.push({ week: `Wk ${week_index.toString()}`, observations: pillar_point_counts_per_week.results.week_of_year[week_index] }); }) return objects; } We can then plug this into Vega-Lite to draw us a graph: vegalite({ data: {values: objects_to_plot}, mark: "bar", encoding: { x: {field: "week", type: "nominal", sort: null}, y: {field: "observations", type: "quantitative"} }, width: width * 0.9 }) It’s worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences. So, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names. pillar_point_nudibranches = { let api_results = await fetch( "https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true" ).then(r => r.json()) let species_list = api_results.results.map(i => ({ value: i.taxon.id, label: `${i.taxon.preferred_common_name} (${i.taxon.name})` })); return species_list } We can create an interactive select box by importing code from Jeremy Ashkenas’ Observable Notebook: add import {select} from "@jashkenas/inputs" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn’t matter - if one cell is referenced by any other cell then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another! viewof select_species = select({ title: "Which Nudibranch do you want to see seasonality for?", options: [{value: "", label: "All the Nudibranchs!"}, ...pillar_point_nudibranches], value: "" }) Then we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113. pillar_point_counts_per_month_per_species = fetch( `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true` ).then(r => r.json()) Now for the fun graph bit! As we did before, we re-format the result of the API into a format compatible with Vega-Lite: objects_to_plot_species_month = { let objects = []; Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) { objects.push({ month: (new Date(2018, (month_index - 1), 1)).toLocaleString("en", {month: "long"}), observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index] }); }) return objects; } (Note that in the above code we are creating a date object with our specific month in, and using toLocaleString() to get the longer English name for the month.
Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month) And we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update! vegalite({ data: {values: objects_to_plot_species_month}, mark: "bar", encoding: { x: {field: "month", type: "nominal", sort:null}, y: {field: "observations", type: "quantitative"} }, width: width * 0.9 }) Now we can see when is the best time of year to plan to go tidepooling in Pillar Point if we want to find a specific species of Nudibranch. This tool is great for planning when to go rockpooling at Pillar Point, but what about if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs? Well… we can create ourselves a dynamic guide with a list of the species, their photo, name and how many times they have been observed in that month of the year! Our select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month. viewof selected_month = select({ title: "When do you want to see Nudibranchs?", options: [ { label: "Whenever", value: "" }, { label: "January", value: "1" }, { label: "February", value: "2" }, { label: "March", value: "3" }, { label: "April", value: "4" }, { label: "May", value: "5" }, { label: "June", value: "6" }, { label: "July", value: "7" }, { label: "August", value: "8" }, { label: "September", value: "9" }, { label: "October", value: "10" }, { label: "November", value: "11" }, { label: "December", value: "12" }, ], value: "" }) We can then use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We’ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name. all_species_data = fetch( `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true` ).then(r => r.json()) You can render HTML directly in a notebook cell using Observable’s html tagged template literal: <style> .collection { margin-top: 2em } .collection .species { display: inline-block; width: 9em; margin-bottom: 2em; } .collection .species-name { font-size: 1em; margin: 0; padding: 0 } .collection .species-count { margin: 0 0 0.3em 0; padding: 0; font-size: 0.75em; color: #999; font-style: italic; } .collection img { display: block; width: 100% } .collection select { font-size: 1.5em; } </style> <h2>If you go to Pillar Point ${ {"": "", "1":"in January", "2":"in February", "3":"in March", "4":"in April", "5":"in May", "6":"in June", "7":"in July", "8":"in August", "9":"in September", "10":"in October", "11":"in November", "12":"in December", }[selected_month] } you might see…</h2> <div class="collection"> ${all_species_data.results.map(s => `<div class="species"><h3 class="species-name">${s.taxon.name}</h3> <p class="species-count">Seen ${s.count} times</p> <img src="${s.taxon.default_photo.medium_url}"></div> `)} </div> These few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month! Play with it yourself in this Observable Notebook.
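As an aside, the hard-coded month lookup inside that template could also be generated with the toLocaleString() trick from earlier. This is just a sketch of an alternative cell, not part of the original notebook:

month_phrase = selected_month === ""
  ? ""
  : `in ${(new Date(2018, Number(selected_month) - 1, 1)).toLocaleString("en", {month: "long"})}`
// …which would let the heading read:
// <h2>If you go to Pillar Point ${month_phrase} you might see…</h2>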
Conclusion I hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new ‘knowledge of nature’ superpower. Lastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there. Here is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences. 2018 Natalie Downe nataliedowne 2018-12-18T00:00:00+00:00 https://24ways.org/2018/observable-notebooks-and-inaturalist/ code
255 Inclusive Considerations When Restyling Form Controls I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility. I would like to begin by saying all these things. However, they’re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I’m thinking of next year. Returning to reality, styling form controls is more tricky and time consuming these days rather than flat out “hard”. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match. However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people the experiences they deserve. Quick level setting Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following: Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels. It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics. Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing. Visually resetting and restyling form controls Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors. As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation. See the Pen form control styling comparisons by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/gZOrZm/ Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated. 
If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browser respect without you even realizing. You ever try playing with the accessibility settings of your display on macOS, or similar Windows setting? It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements: Non-standard This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future. While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all. Can’t make it? Fake it. Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended. Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement, or graceful degradation mindset, you’ll need to take a different approach in styling. Getting clever with markup and CSS The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox. See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/RqEayN/ Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen? As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others. Limitations to cleverness Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. 
There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to. However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider. Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach. Using ARIA when appropriate Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we’re not leveraging a native HTML control as our foundation, it doesn’t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA). ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren’t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA: If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so. By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible. Exceptions to the rule One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control. Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized. While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain which communicate a different way of interacting with the control than what you might expect from a native switch control. See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/XyvoeE/ By adding a role="switch" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, its inherent ability to be focused with the Tab key, and toggled with the Space key (a small sketch of this approach appears at the end of this article). But while this is a valid approach to take in building a switch, how does this actually match up to reality? Does it pass the test(s)?
Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we’ve restyled or rebuilt works the way people expect it to, if not better. What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example: Is the control implemented in all supported browsers? If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field? If so: is each browser’s implementation a good user experience? Is there room for improvement that can be tested against the native baseline? Test with multiple input devices. Where the control is implemented, what is the quality of the user experience when using different input devices, such as a mouse, touchscreen, keyboard, speech recognition or switch device, to name a few. You’ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well. How well does the control take to custom styles? If a control can be styled enough to not need to be rebuilt from scratch, that’s great! But make sure that there are no adverse effects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling. Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled. Do specifications match reality? If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations? For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don’t match people’s expectations, then maybe another solution is in order? Wrapping up While styling form controls is definitely easier than it’s ever been, that doesn’t mean that it’s at all simple, nor will it likely ever be. The level of difficulty you’re going to face is going to depend entirely on what it is you’re hoping to style, add-on to, or recreate. And even if you build your custom control exactly to specification, you’ll still be reliant on browsers and assistive technologies being able to fully understand the component they’ve been presented. Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option. 2018 didn’t end up being the year we got full customization of form controls sorted out. But that’s OK.
If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs. And hey. There’s always next year, right? 2018 Scott O'Hara scottohara 2018-12-13T00:00:00+00:00 https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/ code
254 What I Learned in Six Years at GDS When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public. Lots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned. 1. What is the user need? 
The main discipline I learned from my time at GDS was to always ask ‘what is the user need?’ It’s very easy to build something that seems like a good idea, but until you’ve identified what problem you are solving for the user, you can’t be sure that you are building something that is going to help solve an actual problem. A really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a “where’s my stuff” for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what’s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres. The project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users’ needs. At the end of this discovery, the team realised that a status tracker wasn’t the way to address the problem. As they wrote in this blog post: Status tracking tools are often just ‘channel shift’ for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place. What would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it’s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes. Making sure you know your user At GDS user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you’d observe users working with things you’d built, but even if they weren’t, it was an incredibly valuable experience, and something you should seek out if you are able to. Even if we think we understand our users very well, it is very enlightening to see how users actually use your stuff. Partly because in technology we tend to be power users and the average user doesn’t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged. User needs is not just about building things Asking the question “what is the user need?” really helps focus on why you are doing what you are doing. It keeps things on track, and helps the team think about what the actual desired end goal is (and should be). Thinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What’s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change and any areas you need particular help on with the review. Or you are writing an email to a colleague. What’s the user need? What are you hoping the reader will learn, understand or do as a result of your email? 2. Make things open: it makes things better The second important thing I learned at GDS was ‘make things open: it makes things better’. 
This works on many levels: being open about your strategy, blogging about what you are doing and what you’ve learned (including mistakes), and – the part that I got most involved in – coding in the open. Talking about your work helps clarify it One thing we did really well at GDS was blogging – a lot – about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you’re doing it, and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions. It’s also really valuable to blog about what you’ve learned, especially if you’ve made a mistake. It makes sure you’ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning, which helps make things better. Coding in the open has a lot of benefits In my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed). When I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people’s minds at rest about these things is why it’s crucial to have good standards around communication and positive behaviour - even a critical code review should be considerately given). But I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won’t be looking) makes everyone raise their game slightly. The very fact that you know it’s open, makes you make it a bit better. It helps with lots of other things as well, for example it makes it easier to collaborate with people and share your work. And now that I’ve left GDS, it’s so useful to be able to look back at code I worked on to remember how things worked. Share what you learn It’s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practice. It helps you clarify your thoughts and follow through on what you’ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well. 3. Do the hard work to make it simple (tech edition) ‘Start with user needs’ and ‘Make things open: it makes things better’ are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: ‘Do the hard work to make it simple’, and specifically, how this manifests itself in the way we build technology. At GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages is taken very seriously.
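To make that concrete, here’s the shape a good commit message takes — a short summary line, a blank line, then a body explaining the why rather than the what. This is a hypothetical sketch of mine, not an example from the GDS guidance:

Add retry to publishing API client

Requests to the publishing API occasionally time out under load,
which fails the whole deploy. Retry up to three times with a short
backoff so transient errors are smoothed over; anything that still
fails after that is a genuine problem worth investigating.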
There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make a commit message clearer. We worked very hard on making pull requests good, keeping the reviewer in mind and making it clear to the user how best to review it. Reviewing others’ pull requests is the highest priority so that no-one is blocked, and teams have screens showing the status of open pull requests (using fourth wall) and we even had a ‘pull request seal’, a bot that publishes pull requests to Slack and gets angry if they are uncommented on for more than two days. Making it easier for developers to support the site Another example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone went to to be open and inclusive to developers. The team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and also patiently explaining things whenever other devs in similar positions came with questions. The main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual which detailed what the alert meant and some suggested actions that could be taken to address it. This was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they’d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day. Developers are users too Doing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work. These three principles help us make great things I learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing. I learned about the importance of good comms, working late, responsibly and the value of content design. And the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I’ve talked about here: think about the user need, make things open, and do the hard work to make it simple. 2018 Anna Shipman annashipman 2018-12-08T00:00:00+00:00 https://24ways.org/2018/what-i-learned-in-six-years-at-gds/ business
253 Clip Paths Know No Bounds CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance. The basics of a clip path Before we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everything in the element that is not within the bounds of our shape will be visually removed. Using the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely’s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element’s dimensions. See the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen. So for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines. clip-path: polygon( 30% 0%, 70% 0%, 100% 30%, 100% 70%, 70% 100%, 30% 100%, 0% 70%, 0% 30% ); A shape with fewer vertices than the eye can see It’s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box — or more specifically when we think outside the range of 0% - 100%. Our element’s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element. See the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen. By going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons. Our earlier octagon can similarly be made with only four points. See the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen. Multiple shapes, one clip path We can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon(). See the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen. Depending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element’s box, we can draw extra lines to help us get where we need to go next. It can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles. See the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen. Different shapes with fill rules A polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification — the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.
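As a rough sketch of the difference — the class names are mine and the percentages only approximate a regular star — a five-pointed star can be drawn as a single self-intersecting polygon of just five points, and the fill rule decides whether its centre counts as inside:

/* nonzero (the default): the centre of the star remains visible */
.star-solid {
  clip-path: polygon(50% 0%, 21% 90%, 98% 35%, 2% 35%, 79% 90%);
}

/* evenodd: the same five points, but the centre is clipped away,
   because a ray from the centre crosses an even number of lines */
.star-hollow {
  clip-path: polygon(evenodd, 50% 0%, 21% 90%, 98% 35%, 2% 35%, 79% 90%);
}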
See the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen. As lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path’s lines, the point is considered outside, and if it crosses an odd number the point is inside. Order of operations It is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more. These compositing effects are applied in this order: first CSS Filters (e.g. filter: blur(2px)), then Clipping (what this article is about), then Masking (Clipping’s cousin), then Blend Modes (e.g. mix-blend-mode: multiply), and finally Opacity. This means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element’s box edges. See the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen. If we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally. Revealing content with animation CSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points needs to be the same for each keyframe, as does the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values. See the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen. Don’t keep CSS Shapes in a box Clip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible, which makes this property fairly powerful. 2018 Dan Wilson danwilson 2018-12-20T00:00:00+00:00 https://24ways.org/2018/clip-paths-know-no-bounds/ code
252 Turn Jekyll up to Eleventy Sometimes it pays not to overcomplicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper — and more secure — option. The JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it’s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal Home Page. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013. By combining three approachable languages — Markdown for content, YAML for data and Liquid for templating — the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it’s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself “this, but in Node”. Thankfully, one of Santa’s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree. Introducing Eleventy Eleventy is a more flexible alternative to Jekyll. Besides being written in Node, it’s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains). As content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I’ve discovered, there are a few gotchas. If you’ve been considering making the switch, here are a few tips and tricks to help you on your way [1]. Note: Throughout this article, I’ll be converting Matt Cone’s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory: git clone https://github.com/mattcone/markdown-guide.git cd markdown-guide Before you start If you’ve used tools like Grunt, Gulp or Webpack, you’ll be familiar with Node.js but, if you’ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now’s the time to install Node.js and set up your project to work with its package manager, NPM: Install Node.js: Mac: If you haven’t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node. Windows: Download the Windows installer from the Node.js website and follow the instructions. Initiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems’s Gemfile, this file contains a list of your project’s third-party dependencies. If you’re managing your site with Git, make sure to add node_modules to your .gitignore file too. Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn’t be version controlled.
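A minimal .gitignore for a project like this might look like the following sketch (the _site entry assumes you keep Eleventy’s default output directory, which we’ll meet shortly):

# Dependencies installed by NPM
node_modules

# Eleventy's generated output
_site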
Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too. Installing Eleventy With Node.js installed and your project set up to work with NPM, we can now install Eleventy as a dependency: npm install --save-dev @11ty/eleventy If you open package.json you should see the following: … "devDependencies": { "@11ty/eleventy": "^0.6.0" } … We can now run Eleventy from the command line using NPM’s npx command. For example, to convert the README.md file to HTML, we can run the following: npx eleventy --input=README.md --formats=md This command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition. Configuration Whereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we’ll see later on. We’ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options: module.exports = function(eleventyConfig) { return { dir: { input: "./", // Equivalent to Jekyll's source property output: "./_site" // Equivalent to Jekyll's destination property } }; }; A few other things to bear in mind: Whereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore). By default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you’ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins. Layouts One area where Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub). Wanting to keep our layouts together, we’ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy’s config: module.exports = function(eleventyConfig) { // Aliases are in relation to the _includes folder eleventyConfig.addLayoutAlias('about', 'layouts/about.html'); eleventyConfig.addLayoutAlias('book', 'layouts/book.html'); eleventyConfig.addLayoutAlias('default', 'layouts/default.html'); return { dir: { input: "./", output: "./_site" } }; } Determining which template language to use Eleventy will transform Markdown (.md) files using Liquid by default, but we’ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do. Variables On the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating. Site variables Alongside build settings, Jekyll lets you store common values in its configuration file which can be accessed in our templates via the site.* namespace.
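In a Liquid template, that access looks something like this sketch of mine, using two of the values we’re about to define:

<!-- Using site variables in a Liquid template -->
<title>{{ site.title }}</title>
<a href="{{ site.repo }}">View source on GitHub</a>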
For example, in our Markdown Guide, we have the following values: title: "Markdown Guide" url: https://www.markdownguide.org baseurl: "" repo: http://github.com/mattcone/markdown-guide comments: false author: name: "Matt Cone" og_locale: "en_US" Eleventy’s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged. { "title": "Markdown Guide", "url": "https://www.markdownguide.org", "baseurl": "", "repo": "http://github.com/mattcone/markdown-guide", "comments": false, "author": { "name": "Matt Cone" }, "og_locale": "en_US" } Page variables The mapping below shows common page variables (Jekyll → Eleventy). As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:

page.url → page.url
page.date → page.date
page.path → page.inputPath
page.id → page.outputPath
page.name → page.fileSlug
page.content → content
page.title → title
page.foobar → foobar

When iterating through pages, frontmatter values are available via the data object while content is available via templateContent (Jekyll → Eleventy):

item.url → item.url
item.date → item.date
item.path → item.inputPath
item.name → item.fileSlug
item.id → item.outputPath
item.content → item.templateContent
item.title → item.data.title
item.foobar → item.data.foobar

Ideally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data. Pagination variables Whereas Jekyll’s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this mapping shows the equivalent variables (Jekyll → Eleventy):

paginator.page → pagination.pageNumber
paginator.per_page → pagination.size
paginator.posts → pagination.items
paginator.previous_page_path → pagination.previousPageHref
paginator.next_page_path → pagination.nextPageHref

Filters Although Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few — more than can be covered by this article — but you can replicate them by using Eleventy’s addFilter configuration option. Let’s convert two used by our Markdown Guide: jsonify and where. The jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter.
addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform: // {{ variable | jsonify }} eleventyConfig.addFilter('jsonify', function (variable) { return JSON.stringify(variable); }); Jekyll’s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match: {{ site.members | where: "graduation_year","2014" }} To account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match: // {{ array | where: key,value }} eleventyConfig.addFilter('where', function (array, key, value) { return array.filter(item => { const keys = key.split('.'); const reducedKey = keys.reduce((object, key) => { return object[key]; }, item); return (reducedKey === value ? item : false); }); }); There’s quite a bit going on within this filter, but I’ll try to explain. Essentially we’re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it’s removed. Phew! Includes As with filters, Jekyll provides a set of tags that aren’t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you’re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this: <!-- page.html --> {% include include.html value="key" %} <!-- include.html --> {{ include.value }} in Eleventy, you would do this: <!-- page.html --> {% include "include.html", value: "key" %} <!-- include.html --> {{ value }} A downside of Shopify’s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments. Tweaking Liquid You may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS’s dynamicPartials option to false. Additionally, Eleventy doesn’t support the include_relative tag, meaning you can’t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option. Thankfully, Eleventy allows us to pass options to LiquidJS: eleventyConfig.setLiquidOptions({ dynamicPartials: false, root: [ '_includes', '.' ] }); Collections Jekyll’s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way. Collections in Jekyll In Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. 
Our Markdown Guide has two collections: collections: - basic-syntax - extended-syntax These correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so: {% for syntax in site.extended-syntax %} {{ syntax.title }} {% endfor %} Collections in Eleventy There are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files: --- title: Strikethrough syntax-id: strikethrough syntax-summary: "~~The world is flat.~~" tag: extended-syntax --- We can then iterate over tagged content like this: {% for syntax in collections.extended-syntax %} {{ syntax.data.title }} {% endfor %} Eleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters): eleventyConfig.addCollection('basic-syntax', collection => { return collection.getFilteredByGlob('_basic-syntax/*.md'); }); eleventyConfig.addCollection('extended-syntax', collection => { return collection.getFilteredByGlob('_extended-syntax/*.md'); }); We can extend this further. For example, say we wanted to sort a collection by the display_order property in our document’s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript’s sort method to sort the result: eleventyConfig.addCollection('example', collection => { return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => { return a.data.display_order - b.data.display_order; }); }); Hopefully, this gives you just a hint of what’s possible using this approach. Using directory data to manage defaults By default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following: --- title: Lists syntax-id: lists api: "no" permalink: /basic-syntax/lists.html --- Again, this is probably not something we want to manage on a file-by-file basis but again, Eleventy has features that can help: directory data and permalink variables. For example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path: { "layout": "syntax", "tag": "basic-syntax", "permalink": "basic-syntax/{{ title | slug }}.html" } However, Markdown Guide doesn’t publish syntax examples at individual permanent URLs, it merely uses content files to store data. So let’s change things around a little. No longer tied to Jekyll’s rules about where collection folders should be saved and how they should be labelled, we’ll move them into a folder called _content: markdown-guide └── _content ├── basic-syntax ├── extended-syntax ├── getting-started └── _content.json We will also add a directory data file (_content.json) inside this folder. As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published: { "permalink": false } Static files Eleventy only transforms files whose template language it’s familiar with. 
But often we may have static assets that don’t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true: module.exports = function(eleventyConfig) { … // Copy the `assets` directory to the compiled site folder eleventyConfig.addPassthroughCopy('assets'); return { dir: { input: "./", output: "./_site" }, passthroughFileCopy: true }; } Final considerations Assets Unlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts — we have plenty of choices in that department already. If you’ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, options which are beyond the scope of this article, sadly. Publishing to GitHub Pages One of the benefits of Jekyll is its deep integration with GitHub Pages. Publishing an Eleventy-generated site — or any site not built with Jekyll — to GitHub Pages can be quite involved; it typically means copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It’s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere! Going one louder If you’ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder of why it can be prudent to avoid jumping aboard bandwagons. While it’s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy’s appeal, it’s only a year old, so it has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it. I moved my site to Eleventy because the slowness and inflexibility of Jekyll was preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that’s perfectly fine! But these go to 11. [1] Information provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5 ↩ 2018 Paul Lloyd paulrobertlloyd 2018-12-11T00:00:00+00:00 https://24ways.org/2018/turn-jekyll-up-to-eleventy/ content
251 The System, the Search, and the Food Bank Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center. On the belt are plastic bins filled with assortments of shelf-stable food—one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like “Bottled Water” or “Cereal” or “Candy.” Such was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins. I could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there’s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into. Look, I can’t quite explain it: I just know that I love sorting, organizing, and classifying things—groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask? The opportunity to create meaning from nothing is at the core of my excitement, which is why I’ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. “I can’t believe they’re letting me do this,” I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey. The jumble of donated items coming into the center needs to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It’s not just a nice-to-have that we spent our morning separating cookies from carrots—it’s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we’re talking about donated groceries or—there it is—web content. This happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.) Is X a Y? was the question at the heart of our food sorting—but it’s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves: What is the criteria that defines Y? Does X meet that criteria? We don’t usually articulate it so concretely because it’s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn’t require much thought to place them in the Canned Soup box. Boxed broth, on the other hand, wasn’t allowed, causing a small cognitive hiccup—this X is NOT a Y—that sometimes meant having to re-sort our boxes. On the web, we’re interested—I would hope—in reducing cognitive hiccups for our users. We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access.
After all, most of the time, the process of using the internet is one of uniting a question with an answer—Is this article from a trustworthy source? Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y? We have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means—well, this means a lot of things, and I’ve got limited space here, so let’s focus on these three lessons from the food bank: Use plain, familiar language. This advice seems to be given constantly, but that’s because it’s solid and it’s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it’s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I’ve seen it), or other internally focused language when trying to provide wayfinding for users. Choose words that your audience knows—not only will they be more likely to spot what they’re looking for on your site or app, but you’ll turn up more often in search results. Create consistency in all things. Missteps in consistency look like my earlier chicken broth example—changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts—these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall. Make the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it. This idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. Like any new visitor to a website, we came into the system not knowing the full picture. We didn’t know every category label around the conveyer belt, nor what criteria each category warranted. The system wasn’t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We’d ask staff members. We’d ask more seasoned volunteers. We’d ask each other. 
We’d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn. The more we learned, the easier the sorting became. That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it. The same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away—or, worse, misinform or hurt them. This is why we must make choices that prioritize transparency, consistency, and familiarity. Our users want to know if X is a Y—well-sorted content can give them the answer. 2018 Lisa Maria Martin lisamariamartin 2018-12-16T00:00:00+00:00 https://24ways.org/2018/the-system-the-search-and-the-food-bank/ content
250 Build up Your Leadership Toolbox Leadership. It can mean different things to different people and vary widely between companies. Leadership is more than just a job title. You won’t wake up one day and magically be imbued with all you need to do a good job at leading. If we don’t have a shared understanding of what a Good Leader looks like, how can we work on ourselves towards becoming one? How do you know if you even could be a leader? Can you be a leader without the title? What even is it? I got very frustrated way back in my days as a senior developer when I was given “advice” about my leadership style; at the time I didn’t have the words to describe the styles and ways in which I was leading to be able to push back. I heard these phrases a lot:

you need to step up
you need to take charge
you need to grab the bull by its horns
you need to have thicker skin
you need to just be more confident in your leading
you need to just make it happen

I appreciate some people’s intent was to help me, but honestly it did my head in. WAT?! What did any of this even mean? How exactly do you “step up”, and how are you evaluating what step I’m on? I am confident – what does being even more confident help achieve with leading? Does that not lead you down the path of becoming an arrogant door knob? >___< While there is no One True Way to Lead, there is an overwhelming pattern of positions of leadership within the tech industry being held by men. It felt a lot like what people were fundamentally telling me to do was to be more like an extroverted man. I was being asked to demonstrate more masculine-associated qualities (#notallmen). I’ll leave the gendered nature of leadership qualities as an exercise in googling for the reader. I’ve never had a good manager and at the time had no one else to ask for help, so I turned to my trusted best friends. Books. I <3 books I refused to buy into that style of leadership as being the only accepted way to be. There had to be room for different kinds of people to be leaders and have different leadership styles. There are three books that changed me forever in how I approach and think about leadership.

Primal Leadership, by Daniel Goleman, Richard Boyatzis and Annie McKee
Quiet, by Susan Cain
Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent and Lead, by Brené Brown

I recommend you read them. Ignore the slightly cheesy titles and trust me, just read them. Primal Leadership helped to give me the vocabulary and understanding I needed about the different styles of leadership there are, and how and when to apply them. Quiet really helped me realise how much I was being undervalued and misunderstood in an extroverted world. If I’d had managers or support from someone who valued introverts’ strengths, things would’ve been very different. I would’ve had someone telling others to step down and shut up for a change rather than pushing on me to step up and talk louder over everyone else. It’s OK to be different and to need different things, like time to recharge or time to think before speaking. It also improved my ability to work alongside my more extroverted colleagues by giving me an understanding of their world so I could communicate my needs in a language they would get. Brené Brown’s book I am forever in debt to. Her work gave me the courage to stand up and be my own kind of leader. Even when no-one around me looked or sounded like me, I found my own voice.
It takes great courage to be vulnerable and open about what you can and can’t do. Open about your mistakes. Vocal about what you don’t know, and willing to ask for help. In some lights, these are seen as weaknesses and many have tried to use them against me, to pull me down and exclude me for talking about them. Dear reader, it did not work, they failed. The truth is, they are my greatest strengths. The privileges I have, I use for good as best and often as I can. Just like gender, leadership is not binary If you google for what a leader is, you’ll get many different answers. I personally think Brené’s version is the best as it is one that can apply to a wider range of people, irrespective of job title or function. I define a leader as anyone who takes responsibility for finding potential in people and processes, and who has the courage to develop that potential. Brené Brown Being a leader isn’t about being the loudest in a room, having veto power, talking over people or ignoring everyone else’s ideas. It’s not about “telling people what to do”. It’s not about an elevated status that means you’re better than others. Nor is it about creating a hand-wavey, far-away vision and forgetting to help support people in how to get there. Being a Good Leader is about having a toolbox of leadership styles and skills to choose from depending on the situation. Knowing how and when to apply them is part of the challenge and difficulty in becoming good at it. It is something you will have to continuously work on, forever. There is no Done. Leaders are Made, they are not Born. Be flexible in your leadership style Typically, the best, most effective leaders act according to one or more of six distinct approaches to leadership and skillfully switch between the various styles depending on the situation. That quote comes from Primal Leadership, which summarises the six leadership styles as: Visionary, Coaching, Affiliative, Democratic, Pacesetting and Commanding. Visionary moves people toward a shared dream or future. When change requires a new vision or a clear direction is needed, using a visionary style of leadership helps communicate that picture. By learning how to effectively communicate a story you can help people to move in that direction and give them clarity on why they’re doing what they’re doing. Coaching is about connecting what a person wants and helping to align that with the organisation’s goals. It’s a balance of helping someone improve their performance to fulfil their role and their potential beyond. Affiliative creates harmony by connecting people to each other and requires effective communication to aid facilitation of those connections. This style can be very impactful in healing rifts in a team or to help strengthen connections within and across teams. During stressful times having a positive and supportive connection to those around us really helps see us through those times. Democratic values people’s input and gets commitment through participation. Taking this approach can help build buy-in or consensus and is a great way to get valuable input from people. The tricky part about this style, I find, is that when I gather and listen to everyone’s input, that doesn’t mean the end result is that I have to please everyone. The next two, sadly, are the ones wielded far too often and have the greatest negative impact. It’s where the “telling people what to do” comes from. When used sparingly and in the right situations, they can be a force for good. However, they must not be your default style.
Pacesetting, when used well, is about meeting challenging and exciting goals. When you need to get high-quality results from a motivated and well-performing team, this can be great to help achieve real focus and drive. Sadly it is so overused and poorly executed that it becomes the “just make it happen” style, a driver of unrealistic workloads which contributes to burnout. Commanding, when used appropriately, soothes fears by giving clear direction in an emergency or crisis. When shit is on fire, you want to know that your leadership ability can help kick-start a turnaround and bring clarity. Then switch to another style. This approach is also required when dealing with problematic employees or unacceptable behaviour. Commanding style seems to be what a lot of people think being a leader is: taking control and commanding a situation. It should be used sparingly and only when absolutely necessary. Be responsible for the power you wield If reading through those you find yourself feeling a bit guilty that maybe you overuse some of the styles, or overwhelmed that you haven’t got all of these down and ready to use in your toolbox… Take a breath. Take responsibility. Take action. No one is perfect, and it’s OK. You can start right now working on those. You can have a conversation with your team and try being open about how you’re going to try some different styles. You can be vulnerable and own up to mistakes you might’ve made, followed with an apology. You can order those books and read them. Those books will give you more examples of those leadership styles and help you to find your own voice. The impact you can have on the lives of those around you when you’re a leader is huge. You can help be that positive impact, help discover and develop potential in someone. Time spent understanding people is never wasted. Cate Huston. I believe in you. <3 Mazz. 2018 Mazz Mosley mazzmosley 2018-12-10T00:00:00+00:00 https://24ways.org/2018/build-up-your-leadership-toolbox/ business
249 Fast Autocomplete Search for Your Website Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive. I’m going to build a search engine for 24 ways that’s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I’ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface. Try it out here, then read on to see how I built it. First step: crawling the data The first step in building a search engine is to grab a copy of the data that you plan to make searchable. There are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don’t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need. I’m going to do exactly that against 24 ways: I’ll build a simple crawler using wget, a command-line tool that features a powerful “recursive” mode that’s ideal for scraping websites. We’ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running. Then we’ll tell wget to recursively crawl the website, using the --recursive flag. We don’t want to fetch every single page on the site - we’re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 We want to be polite, so let’s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2 The first time I ran this, I accidentally downloaded the comments pages as well. We don’t want those, so let’s exclude them from the crawl using -X "/*/*/comments". Finally, it’s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this. Tie all of those options together and we get this command: wget --recursive --wait 2 --no-clobber -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 -X "/*/*/comments" https://24ways.org/archives/ If you leave this running for a few minutes, you’ll end up with a folder structure something like this: $ find 24ways.org 24ways.org 24ways.org/2013 24ways.org/2013/why-bother-with-accessibility 24ways.org/2013/why-bother-with-accessibility/index.html 24ways.org/2013/levelling-up 24ways.org/2013/levelling-up/index.html 24ways.org/2013/project-hubs 24ways.org/2013/project-hubs/index.html 24ways.org/2013/credits-and-recognition 24ways.org/2013/credits-and-recognition/index.html ... As a quick sanity check, let’s count the number of HTML pages we have retrieved: $ find 24ways.org | grep index.html | wc -l 328 There’s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. They aren’t linked in the /archives/ yet so we need to point our crawler at the site’s front page instead: wget --recursive --wait 2 --no-clobber -I /2018 -X "/*/*/comments" https://24ways.org/ Thanks to --no-clobber, this is safe to run every day in December to pick up any new content. 
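If you wanted to automate that daily run, a crontab entry along these lines would do it — a sketch of mine, with a hypothetical working directory:

# Re-crawl for new articles at 6am each day; --no-clobber means
# pages we already have are skipped rather than re-downloaded
0 6 * * * cd /home/you/24ways-search && wget --recursive --wait 2 --no-clobber -I /2018 -X "/*/*/comments" https://24ways.org/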
We now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let’s use them to build ourselves a search index. Building a search index using SQLite There are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL. I’m going to use something that’s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite. SQLite is the world’s most widely deployed database, even though many people have never even heard of it. That’s because it’s designed to be used as an embedded database: it’s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch! SQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn’t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications. This doesn’t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year. It turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension. I’ve been doing a lot of work with SQLite recently, and as part of that, I’ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It’s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that’s similar to the Observable notebooks Natalie described on 24 ways yesterday. If you haven’t used Jupyter before, here’s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn’t clash with any other installed packages: $ python3 -m venv ./jupyter-venv $ ./jupyter-venv/bin/pip install jupyter # ... lots of installer output # Now let's install some extra packages we will need later $ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib # And start the notebook web application $ ./jupyter-venv/bin/jupyter-notebook # This will open your browser to Jupyter at http://localhost:8888/ You should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook. A neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I’ve published the notebook I used to build the search index on my GitHub account. Here’s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what’s going on.
from pathlib import Path from bs4 import BeautifulSoup as Soup base = Path("/Users/simonw/Dropbox/Development/24ways-search") articles = list(base.glob("*/*/*/*.html")) # articles is now a list of paths that look like this: # PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html') docs = [] for path in articles: year = str(path.relative_to(base)).split("/")[1] url = 'https://' + str(path.relative_to(base).parent) + '/' soup = Soup(path.open().read(), "html5lib") author = soup.select_one(".c-continue")["title"].split( "More information about" )[1].strip() author_slug = soup.select_one(".c-continue")["href"].split( "/authors/" )[1].split("/")[0] published = soup.select_one(".c-meta time")["datetime"] contents = soup.select_one(".e-content").text.strip() title = soup.find("title").text.split(" ◆")[0] try: topic = soup.select_one( '.c-meta a[href^="/topics/"]' )["href"].split("/topics/")[1].split("/")[0] except TypeError: topic = None docs.append({ "title": title, "contents": contents, "year": year, "author": author, "author_slug": author_slug, "published": published, "url": url, "topic": topic, }) After running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this: [ { "title": "Why Bother with Accessibility?", "contents": "Web accessibility (known in other fields as inclus...", "year": "2013", "author": "Laura Kalbag", "author_slug": "laurakalbag", "published": "2013-12-10T00:00:00+00:00", "url": "https://24ways.org/2013/why-bother-with-accessibility/", "topic": "design" }, { "title": "Levelling Up", "contents": "Hello, 24 ways. I’m Ashley and I sell property ins...", "year": "2013", "author": "Ashley Baxter", "author_slug": "ashleybaxter", "published": "2013-12-06T00:00:00+00:00", "url": "https://24ways.org/2013/levelling-up/", "topic": "business" }, ... My sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here’s how to do that using this list of dictionaries. import sqlite_utils db = sqlite_utils.Database("/tmp/24ways.db") db["articles"].insert_all(docs) That’s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table. (I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.) You can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this: $ sqlite3 /tmp/24ways.db sqlite> .headers on sqlite> .mode column sqlite> select title, author, year from articles; title author year ------------------------------ ------------ ---------- Why Bother with Accessibility? Laura Kalbag 2013 Levelling Up Ashley Baxte 2013 Project Hubs: A Home Base for Brad Frost 2013 Credits and Recognition Geri Coady 2013 Managing a Mind Christopher 2013 Run Ragged Mark Boulton 2013 Get Started With GitHub Pages Anna Debenha 2013 Coding Towards Accessibility Charlie Perr 2013 ... <Ctrl+D to quit> There’s one last step to take in our notebook. We know we want to use SQLite’s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table.
We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this: db["articles"].enable_fts(["title", "author", "contents"]) Introducing Datasette Datasette is the open-source tool I’ve been building that makes it easy to both explore SQLite databases and publish them to the internet. We’ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn’t it be nice if we could use a more human-friendly interface for that? If you don’t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I’ll show you how to deploy Datasette to Heroku like this later in the article. If you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter: ./jupyter-venv/bin/pip install datasette This will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette. Now you can run Datasette against the 24ways.db file we created earlier like so: ./jupyter-venv/bin/datasette /tmp/24ways.db This will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application. If you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db Publishing the database to the internet One of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here’s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku’s free tier) using datasette publish: $ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways -----> Python app detected -----> Installing requirements with pip -----> Running post-compile hook -----> Discovering process types Procfile declares types -> web -----> Compressing... Done: 47.1M -----> Launching... Released v8 https://search-24ways.herokuapp.com/ deployed to Heroku If you try this out, you’ll need to pick a different --name, since I’ve already taken search-24ways. You can run this command as many times as you like to deploy updated versions of the underlying database. Searching and faceting Datasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action. SQLite search supports wildcards, so if you want autocomplete-style search where you don’t need to enter full words to start getting results you can add a * to the end of your search term. Here’s a search for acces* which returns articles on accessibility: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A A neat feature of Datasette is the ability to calculate facets against your data.
Here’s a page showing search results for svg with facet counts calculated against both the year and the topic columns: http://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic Every page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL: http://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A Better search using custom SQL The search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they’re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page. This exposes the underlying SQL query, which looks like this: select rowid, * from articles where rowid in ( select rowid from articles_fts where articles_fts match :search ) order by rowid limit 101 We can do better than this by constructing a custom SQL query. Here’s the query we will use instead: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, articles_fts.rank, articles.title, articles.url, articles.author, articles.year from articles join articles_fts on articles.rowid = articles_fts.rowid where articles_fts match :search || "*" order by rank limit 10; You can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it’s safe to allow users to provide arbitrary SQL select queries for Datasette to execute. There’s a lot going on here! Let’s break the SQL down line-by-line: select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, We’re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you’ll see why in the JavaScript later on. articles_fts.rank, articles.title, articles.url, articles.author, articles.year These are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table. from articles join articles_fts on articles.rowid = articles_fts.rowid articles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it. where articles_fts match :search || "*" order by rank limit 10; :search || "*" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results. How do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we’re going to use ?_shape=array to get back a plain array of objects (see the JSON API call to search for articles matching SVG). The HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.
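If you want to sanity-check that custom query outside of Datasette, here is a minimal sketch using Python’s built-in sqlite3 module against the /tmp/24ways.db file we built earlier. It assumes your Python build includes the FTS5 extension (most recent ones do) and that enable_fts() has already been run; the "access" search term and the bracket snippet markers are just examples, not from the article:

import sqlite3

# Connect to the database we built in the notebook earlier.
conn = sqlite3.connect("/tmp/24ways.db")

# The same custom search SQL as above, with simpler snippet markers.
sql = """
select
  snippet(articles_fts, -1, '[', ']', '...', 100) as snippet,
  articles_fts.rank, articles.title, articles.url
from articles
  join articles_fts on articles.rowid = articles_fts.rowid
where articles_fts match :search || '*'
order by rank limit 10
"""

# In FTS5, rank is a negative float: the closer to zero, the weaker the match.
for snippet, rank, title, url in conn.execute(sql, {"search": "access"}):
    print(f"{rank:.2f}  {title}  {url}")

If the ranking and snippets look sensible here, the hosted JSON API should return the same rows.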
A simple JavaScript autocomplete search interface I considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own. We need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh: function debounce(func, wait, immediate) { let timeout; return function() { let context = this, args = arguments; let later = () => { timeout = null; if (!immediate) func.apply(context, args); }; let callNow = immediate && !timeout; clearTimeout(timeout); timeout = setTimeout(later, wait); if (callNow) func.apply(context, args); }; }; We’ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing. Since we’re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I’m amazed that browsers still don’t bundle a default one of these (note that the ampersand is escaped first, so we don’t double-escape the entities the other replacements produce): const htmlEscape = (s) => s.replace( /&/g, '&amp;' ).replace( />/g, '&gt;' ).replace( /</g, '&lt;' ).replace( /"/g, '&quot;' ).replace( /'/g, '&#039;' ); We need some HTML for the search form, and a div in which to render the results: <h1>Autocomplete search</h1> <form> <p><input id="searchbox" type="search" placeholder="Search 24ways" style="width: 60%"></p> </form> <div id="results"></div> And now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript: // Embed the SQL query in a multi-line backtick string: const sql = `select snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet, articles_fts.rank, articles.title, articles.url, articles.author, articles.year from articles join articles_fts on articles.rowid = articles_fts.rowid where articles_fts match :search || "*" order by rank limit 10`; // Grab a reference to the <input type="search"> const searchbox = document.getElementById("searchbox"); // Used to avoid race-conditions: let requestInFlight = null; searchbox.onkeyup = debounce(() => { const q = searchbox.value; // Construct the API URL, using encodeURIComponent() for the parameters const url = ( "https://search-24ways.herokuapp.com/24ways-866073b.json?sql=" + encodeURIComponent(sql) + `&search=${encodeURIComponent(q)}&_shape=array` ); // Unique object used just for race-condition comparison let currentRequest = {}; requestInFlight = currentRequest; fetch(url).then(r => r.json()).then(d => { if (requestInFlight !== currentRequest) { // Avoid race conditions where a slow request returns // after a faster one. return; } let results = d.map(r => ` <div class="result"> <h3><a href="${r.url}">${htmlEscape(r.title)}</a></h3> <p><small>${htmlEscape(r.author)} - ${r.year}</small></p> <p>${highlight(r.snippet)}</p> </div> `).join(""); document.getElementById("results").innerHTML = results; }); }, 100); // debounce every 100ms There’s just one more utility function, used to help construct the HTML results: const highlight = (s) => htmlEscape(s).replace( /b4de2a49c8/g, '<b>' ).replace( /8c94a2ed4b/g, '</b>' ); This is what those unique strings passed to the snippet() function were for. Avoiding race conditions in autocomplete One trick in this code that you may not have seen before is the way race-conditions are handled.
Any time you build an autocomplete feature, you have to consider the following case: User types acces Browser sends request A - querying documents matching acces* User continues to type accessibility Browser sends request B - querying documents matching accessibility* Request B returns. It was fast, because there are fewer documents matching the full term The results interface updates with the documents from request B, matching accessibility* Request A returns results (this was the slower of the two requests) The results interface updates with the documents from request A - results matching acces* This is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with the results from the earlier, slower request. Thankfully there’s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null. Any time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well. When the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request, we can detect that and avoid updating the results. (An alternative approach using AbortController is sketched after this article.) It’s not a lot of code, really And that’s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don’t think it matters. You’re welcome to refactor it as much as you like. How good is this search implementation? I’ve been building search engines for a long time using a wide variety of technologies and I’m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it’s based on SQL makes it easy and flexible to work with. A surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite. More importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you’re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now. For more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter. 2018 Simon Willison simonwillison 2018-12-19T00:00:00+00:00 https://24ways.org/2018/fast-autocomplete-search-for-your-website/ code
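As a footnote to that race-condition trick: in browsers that support it, you could instead cancel the stale request outright with AbortController. This isn’t from the article above - just a minimal sketch of the alternative, reusing the sql string and the endpoint from the implementation earlier:

// Alternative to the requestInFlight token: abort the previous
// fetch() whenever a new request starts.
let controller = null;

function searchFor(q) {
  if (controller) controller.abort(); // cancel any in-flight request
  controller = new AbortController();
  const url = (
    "https://search-24ways.herokuapp.com/24ways-866073b.json?sql=" +
    encodeURIComponent(sql) +
    `&search=${encodeURIComponent(q)}&_shape=array`
  );
  return fetch(url, { signal: controller.signal })
    .then(r => r.json())
    .catch(err => {
      // An aborted request is expected here, not an error worth surfacing.
      if (err.name !== "AbortError") throw err;
    });
}

Either approach works; the token comparison has the advantage of running in older browsers that lack AbortController.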
248 How to Use Audio on the Web I know what you’re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! 🙉 You’re having flashbacks, flashbacks to the days of yore, when we had a <bgsound> element and yup did everyone think that was the most rad thing since <blink>. I mean put those two together with a <marquee>, only use CSS colour names, make sure your borders were all set to ridge and you’ve got yourself the neatest website since 1998. The sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then. Yes, it is 2018, the end of it in fact, soon to be 2019. We are certainly living in the future. Hoverboards? Self-driving cars. Holodecks? VR headsets. Rocket boots? Drone racing. Sound on websites? Get real, Ruth. We can’t help but be jaded, even though the <bgsound> element is deprecated, and the autoplay policy appeared this year. Although still in its infancy, the policy “controls when video and audio is allowed to autoplay”, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future. But then of course comes the question, having lived in a muted present for so long, where and why would you use audio? ✨ Showcase Time ✨ There are some incredible uses of audio on websites today. This is my personal favourite: futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience. futurelibrary.no Another site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good. pottermore.com Eighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive, it adds to the experience. Eighty-six and a half years Sound can be powerful and in some cases useful. Last year I wrote about using sounds to help validate forms. Audiochart is a library which “allows the user to explore charts on web pages using sound and the keyboard”. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here. Then there’s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the powers we have with the Web Audio API today. lightnote.co learningmusic.ableton.com Considerations Yes, ’tis the season, let’s be more thoughtful about our audios. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to ‘enter’ the site and headphones are recommended.
This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) sets the user’s expectations by stating that headphones are recommended: they will expect sound and, if in a public setting, can enlist the use of a common electronic device to cause less embarrassment. Eighty-six and a half years Allowing mute and/or volume control clearly within the user interface is a good idea. It won’t draw the user out of the experience; it’ll give more control to the user about what audio they want to hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a very visible volume control than to fumble with device settings. Indicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn’t always the first place everyone will look. To The Future So let’s go! We see amazing demos built with Web Audio, and I’m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that. But audio doesn’t actually need to be all bells and whistles (hey, it’s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started to introduce some good sound design in your web design. Isn’t it great then that there’s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers). There’s a minimal sketch of those basics after this article, too. This year I believe we have all experienced the web as a shopping mall more than ever. Its shining store fronts, flashing adverts, fast food, loud noises. Let’s use 2019 to create more forests to explore, oceans to dive and mountains to climb. 2018 Ruth John ruthjohn 2018-12-22T00:00:00+00:00 https://24ways.org/2018/how-to-use-audio-on-the-web/ design
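To make that MDN starting point concrete, here is a minimal sketch of starting, stopping, volume and panning with the Web Audio API. The file name and element IDs are placeholders invented for the example, not taken from any of the sites above:

// Build a simple graph: audio element -> gain (volume) -> panner -> speakers.
const context = new AudioContext();
const audio = new Audio("sleigh-bells.mp3"); // placeholder file
const source = context.createMediaElementSource(audio);
const gain = context.createGain();
const panner = new StereoPannerNode(context, { pan: 0 });

source.connect(gain).connect(panner).connect(context.destination);

// The autoplay policy means sound must start from a user gesture.
document.querySelector("#play").addEventListener("click", () => {
  context.resume().then(() => audio.play());
});
document.querySelector("#pause").addEventListener("click", () => {
  audio.pause();
});

// Sliders for volume (0 to 1) and pan (-1 is left, 1 is right).
document.querySelector("#volume").addEventListener("input", (event) => {
  gain.gain.value = parseFloat(event.target.value);
});
document.querySelector("#pan").addEventListener("input", (event) => {
  panner.pan.value = parseFloat(event.target.value);
});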
247 Managing Flow and Rhythm with CSS Custom Properties An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better, but it can also improve scannability. Browsers ship with default CSS and these styles often create consistent rhythm for flow elements out of the box. The problem, though, is that we often remove these styles with a CSS reset. Elements such as <div> and <section> also have no default margin or padding associated with them. I’ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS and levelling the playing field for component-driven front-ends, with very little success. This experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in! The Flow utility With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy. The code .flow { --flow-space: 1em; } .flow > * + * { margin-top: var(--flow-space); } What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them. The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty about setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need it. Pretty cool, right? Let’s look at some examples. A basic example See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/LXqerj What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. Because our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a <h2> in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we’d affect everything because margin values would be calculated as 1.1X the font size. This example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though? See the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/qQgxaY I like lots of whitespace with my article layouts, so the 1em space isn’t going to cut it for all elements.
I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances: h2 { --flow-space: 3rem; } Notice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured. A more advanced example Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space. See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/ZmPGyL In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself: <article class="media__content flow"> ... </article> Something to remember though: the custom properties cascade in the same way that other CSS values do, so you’ve got to keep that in mind. There’s a good example of that here: because the flow utility on our .features component has a --flow-space override, the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element. “But what about old browsers?”, I hear you cry We’re using CSS Custom Properties that, at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size: .flow { --flow-space: 1em; } .flow > * + * { margin-top: 1em; margin-top: var(--flow-space); } Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement! (There’s a small self-contained demo of the finished utility after this article.) Wrapping up This tiny little utility can bring great power for when you want to consistently space elements, vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS. If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on! 2018 Andy Bell andybell 2018-12-07T00:00:00+00:00 https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/ code
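Here is that self-contained demo of the article’s utility: the flow class, the old-browser fallback, and a heading override all in one place. The .post class and the markup are invented for the sketch:

<style>
  .flow {
    --flow-space: 1em;
  }
  .flow > * + * {
    margin-top: 1em;                /* fallback for browsers without Custom Properties */
    margin-top: var(--flow-space);  /* preferred, contextual value */
  }
  .flow h2 {
    --flow-space: 3rem;             /* extra space above headed sections */
  }
</style>

<article class="post flow">
  <h1>A post title</h1>
  <p>Some introductory copy.</p>
  <h2>A section heading</h2>
  <p>The heading above gets 3rem of space; these paragraphs get 1em.</p>
</article>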
246 Designing Your Site Like It’s 1998 It’s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary—and my fourteenth contribution to 24 ways— I’d like to explain how I would’ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films. My design for Planes, Trains and Automobiles is fixed at 800px wide. Developing a <frameset> framework I’ll start by using frames to set up the framework for this new website. Frames are individual pages—one for navigation, the other for my content—pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a <frameset> element. I add two rows to my <frameset>; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don’t want frame borders or any space between my frames, I set frameborder and framespacing attributes to 0: <frameset frameborder="0" framespacing="0" rows="50,*"> […] </frameset> Next I add the source of my two frame documents. I don’t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame: <frameset frameborder="0" framespacing="0" rows="50,*"> <frame noresize scrolling="no" src="nav.html"> <frame src="content.html"> </frameset> I do want links from my navigation to open in the content frame, so I give each <frame> a name so I can specify where I want links to open: <frameset frameborder="0" framespacing="0" rows="50,*"> <frame name="navigation" noresize scrolling="no" src="nav.html"> <frame name="content" src="content.html"> </frameset> The framework for this website is simple as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets—and as many individual documents—as I need: <frameset rows="50,*"> <frame name="navigation"> <frameset cols="25%,*"> <frame name="sidebar"> <frame name="content"> </frameset> </frameset> Letterbox framesets were a common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space. Handling older browsers Sadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content: <noframes> <body> <p>This page uses frames, but your browser doesn’t support them. Please upgrade your browser.</p> </body> </noframes> Forcing someone back into a frame Sometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a <frameset>, people could easily miss out on other parts of a design. This short script will prevent this happening and because it’s vanilla JavaScript, it doesn’t require a library such as jQuery: <script type="text/javascript"> if (top == self) { location = 'frameset.html'; } </script> Laying out my page Before starting my layout, I add a few basic background and colour styles.
I must include these attributes in every page on my website: <body background="img/container.jpg" bgcolor="#fef7fb" link="#245eab" alink="#245eab" vlink="#3c146e" text="#000000"> I want absolute control over how people experience my design and don’t want to allow it to stretch, so I first need a <table> which limits the width of my layout to 800px. The align attribute will keep this <table> in the centre of someone’s screen: <table width="800" align="center"> <tr> <td>[…]</td> </tr> </table> Although they were developed for displaying tabular information, the cells and rows which make up the <table> element make it ideal for the precise implementation of a design. I need several tables—often nested inside each other—to implement my design. These include tables for a banner and three rows of content: <table width="800" align="center"> <table>[…]</table> <table> <table> <table>[…]</table> </table> </table> <table>[…]</table> <table>[…]</table> </table> The width of the first table—used for my banner—is fixed to match the logo it contains. As I don’t need borders, padding, or spacing between these cells, I use attributes to remove them: <table border="0" cellpadding="0" cellspacing="0" width="587" align="center"> <tr> <td><img src="logo.gif" border="0" width="587" alt="Logo"></td> </tr> </table> The next table—which contains the largest image, introduction, and a call-to-action—is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images: <tr> <td><img src="spacer.gif" width="593" height="1"></td> <td><img src="spacer.gif" width="207" height="1"></td> </tr> The height and width of these “shims” or “spacers” is only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development. For the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images: <tr> <td background="slice-1.jpg" height="473"> </td> <td background="slice-2.jpg" height="473">[…]</td> </tr> <tr> <td background="slice-3.jpg" height="388"> </td> </tr> I use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table—this time with a white background—inside it: <table border="0" cellpadding="1" cellspacing="0"> <tr> <td> <table bgcolor="#ffffff" border="0" cellpadding="0" cellspacing="0"> […] </table> </td> </tr> </table> Adding details Tables are fabulous tools for laying out a page, but they’re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my “Buy the DVD” call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. 
Then, I add those images as backgrounds and use spacers to perfectly size my button: <table border="0" cellpadding="0" cellspacing="0"> <tr> <td background="btn-1.jpg" border="0" height="48" width="30"><img src="spacer.gif" width="30" height="1"></td> <td background="btn-2.jpg" border="0" height="48"> <center> <a href="" target="_blank"><b>Buy the DVD</b></a> </center> </td> <td background="btn-3.jpg" border="0" height="48" width="30"><img src="spacer.gif" width="30" height="1"></td> </tr> </table> I use those same elements to add details to headlines and lists too. Adding a “bullet” to each item in a list needs only two additional table cells, a circular graphic, and a spacer: <table border="0" cellpadding="0" cellspacing="0"> <tr> <td width="10"><img src="li.gif" border="0" width="8" height="8"> </td> <td><img src="spacer.gif" width="10" height="1"> </td> <td>Directed by John Hughes</td> </tr> </table> Implementing a typographic hierarchy So far I’ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use <font> elements to change the typeface from the browser’s default to any font installed on someone’s device: <font face="Arial">Planes, Trains and Automobiles is a comedy film […]</font> To adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser’s default. I use a size of 4 for this headline and 2 for the text which follows: <font face="Arial" size="4"><b>Steve Martin</b></font> <font face="Arial" size="2">An American actor, comedian, writer, producer, and musician.</font> When I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every element on all pages on my website. NB: I use as many <br> elements as needed to create space between headlines and paragraphs. View the final result (and especially the source.) My modern day design for Planes, Trains and Automobiles. I can imagine many people reading this and thinking “This is terrible advice because we don’t develop websites like this in 2018.” That’s true. We have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes: font-variant-caps: titling-caps; font-variant-ligatures: common-ligatures; font-variant-numeric: oldstyle-nums; Grid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS: body { display: grid; grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr; grid-template-rows: auto; grid-column-gap: 2vw; grid-row-gap: 1vh; } Flexbox has made it easy to develop flexible components such as navigation links: nav ul { display: flex; } nav li { flex: 1; } Just one line of CSS can create multiple columns of fluid type: main { column-width: 12em; } CSS Shapes enable text to flow around irregular shapes including polygons: [src*="main-img"] { float: left; shape-outside: polygon(…); } Today, we wouldn’t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead: .btn { background: linear-gradient(#8B1212, #DD3A3C); border-radius: 1em; box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50); } CSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. 
So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we’re certain we know how to design and develop products and websites better than we did at the end of 1998. Strange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That’s why it’s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today—tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass—will be any more relevant in the future than <font> elements, frames, layout tables, and spacer images are today. I have no prediction for what the web will be like twenty years from now. However, I want to believe we’ll build on what we’ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won’t be repeated. Head over to my website if you’d like to read about how I’d implement my design for ‘Planes, Trains and Automobiles’ today. 2018 Andy Clarke andyclarke 2018-12-23T00:00:00+00:00 https://24ways.org/2018/designing-your-site-like-its-1998/ code
245 Web Content Accessibility Guidelines 2.1—for People Who Haven’t Read the Update Happy United Nations International Day of Persons with Disabilities 2018! The United Nations chose “Empowering persons with disabilities and ensuring inclusiveness and equality” as this year’s theme. We’ve seen great examples of that in 2018; for example, Paul Robert Lloyd has detailed how he improved the accessibility of this very website. On social media, US Congressmember-Elect Alexandria Ocasio-Cortez started using the Clipomatic app to add live captions to her Instagram live stories, conforming to success criterion 1.2.4, “Captions (Live)” of the Web Content Accessibility Guidelines (figure 1) …and British Vogue Contributing Editor Sinéad Burke has used the split-screen feature of Instagram live stories to invite an interpreter to provide live Sign Language interpretation, going above and beyond success criterion 1.2.6, “Sign Language (Prerecorded)” of the Web Content Accessibility Guidelines (figure 2). Figure 1: Screenshot of Alexandria Ocasio-Cortez’s Instagram story with live captions. Figure 2: Screenshot of Sinéad Burke’s Instagram story with Sign Language Interpretation. That theme chimes with this year’s publication of the World Wide Web Consortium (W3C)’s Web Content Accessibility Guidelines (WCAG) 2.1. In last year’s “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I mentioned the scale of the project to produce this update during 2018: “the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible”. The WCAG working group have added 17 success criteria to the 61 that they released way back in 2008—for context, that was 1½ years before Apple released their first iPad! These new criteria make it easier than ever for us web geeks to produce work that is more accessible to people using mobile devices and touchscreens, people with low vision, and people with cognitive and learning disabilities. Once again, let’s rip off all the legalese and ambiguous terminology like wrapping paper, and get up to date. Can your users perceive the information on your website? The first guideline has criteria that help you prevent your users from asking, “What the **** is this thing here supposed to be?” We’ve seven new criteria for this guideline. 1.3.4 Some people can’t easily change the orientation of the device that they use to browse the web, and so you should make sure that your users can use your website in portrait orientation and in landscape orientation. Consider how people slowly twirl presents that they have plucked from under the Christmas tree, to find the appropriate orientation—and expect your users to do likewise with your websites and apps. We’ve had 18½ years since John Allsopp’s revelatory Dao of Web Design enlightened us to “embrace the fact that the web doesn’t have the same constraints” as printed pages, and to “design for this flexibility”. So, even though this guideline doesn’t apply to websites where “a specific display orientation is essential,” such as a piano tutorial, always ask yourself, “What would John Allsopp do?” 1.3.5 You should help the user’s browser to automatically complete–or not complete–form fields, to save the user some time and effort. The surprisingly powerful and flexible autocomplete attribute for input elements should prove most useful here.
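To make that concrete, here is a minimal sketch of autocomplete values on a form. The fields are invented for illustration; the attribute values themselves - name, email, tel - come from the HTML specification:

<label for="name">Full name</label>
<input id="name" name="name" autocomplete="name">

<label for="email">Email address</label>
<input id="email" name="email" type="email" autocomplete="email">

<label for="tel">Phone number</label>
<input id="tel" name="tel" type="tel" autocomplete="tel">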
If you’ve used microformats or microdata to mark up information about a person, the autocomplete attribute’s range of values should seem familiar. I like how the W3C’s “Using HTML 5.2 autocomplete attributes” says that autocompleted values in forms help “those with dexterity disabilities who have trouble typing, those who may need more time, and anyone who wishes to reduce effort to fill out a form” (emphasis mine). Um…🙋‍♂️ 1.3.6 I like this one a lot, because it can help a huge audience to overcome difficulties that might prevent them from ever using the web. Some people have cognitive difficulties that affect their memory, focus, attention, language processing, and/or decision-making. Those users often rely on assistive technologies that present information through proprietary symbols, summaries of content, and keyboard shortcuts. You could use ARIA landmarks to identify the regions of each webpage. You could also keep an eye on the W3C’s ongoing work on Personalisation Semantics. 1.4.10 If you were to find a Nintendo Switch and “Super Mario Odyssey” under your Christmas tree, you would have many hours of enjoyably scrolling horizontally and vertically to play the game. On the other hand, if you had to zoom a webpage to 400% so that you could read the content, you might have many hours of frustratedly scrolling horizontally and vertically to read the content. Learned reader, I assume you understand the purpose and the core techniques of Responsive Web Design. I also assume you’re getting up to speed with the new Grid, Flexbox, and Box Alignment techniques for layout, and overflow-wrap. Using those skills, you should make sure that all content and functionality remain available when the browser is 320px wide, without your user needing to scroll horizontally. (For vertical text, you should make sure that all content and functionality remain available when the browser is 256px high, without your user needing to scroll vertically.) You don’t have to do this for anything that would lose meaning if you restructured it into one narrow column. That includes some images, maps, diagrams, video, games, presentations, and data tables. Remember to check how your media queries affect font size: your user might find that text becomes smaller as they zoom into the webpage. So, test this one on real devices, or—better yet—test it with real users. 1.4.11 In “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I recommended bookmarking Lea Verou’s Contrast Ratio calculator for checking that text contrasts enough with its background (for success criteria 1.4.3 and 1.4.6), so that more people can read it more easily. For this update, you should make sure that form elements and their focus states have a 3:1 contrast ratio with the colour around them. This doesn’t apply to controls that use the browser’s default styling. Also, you should make sure that graphics that convey information have a 3:1 contrast ratio with the colour around them. 1.4.12 Some people, due to low vision or dyslexia, might need to modify the typography that you agonised over. Research indicates that you should make sure that all content and functionality would remain available if a user were to set: line height to at least 1½ × the font size; space below paragraphs to at least 2 × the font size; letter spacing to at least 0.12 × the font size; word spacing to at least 0.16 × the font size. To test this, check for text overlapping, text hiding behind other elements, or text disappearing.
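One way to run that 1.4.12 test is to apply the spacing values yourself with a user stylesheet or bookmarklet and then look for breakage - a minimal sketch, assuming nothing about the page under test:

/* Apply the 1.4.12 spacing values everywhere, then check for
   clipped, overlapping, or disappearing text. */
* {
  line-height: 1.5 !important;
  letter-spacing: 0.12em !important;
  word-spacing: 0.16em !important;
}
p {
  margin-bottom: 2em !important;
}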
1.4.13 Sometimes when visiting a website, you hover over—or tab on to—something that unleashes a newsletter subscription pop-up, some suggested “related content”, and/or a GDPR-related pop-up. On a well-designed website, you can press the Esc key on your keyboard or click a prominent “Close” button or “X” button to vanquish such intrusions. If the Esc key fails you, or if you either can’t see or can’t click the “Close” button…well, you’ll probably just close that browser tab. This situation can prove even more infuriating for users with low vision or cognitive disabilities. So, if new content appears when your user hovers over or tabs on to some element, you should make sure that: your user can dismiss that content without needing to move their pointer or tab on to some other element (this doesn’t apply to error warnings, or well-behaved content that doesn’t obscure or replace other content); the new content remains visible while your user moves their cursor over it; the new content remains visible until the user moves their hover or focus away from the triggering element, dismisses that content, or the content is no longer valid. This doesn’t apply to situations such as hovering over an element’s title attribute, where the user’s browser controls the display of the content that appears. Can users operate the controls and links on your website? The second guideline has criteria that help you prevent your users from asking, “How the **** does this thing work?” We’ve nine new criteria for this guideline. 2.1.4 Some websites offer keyboard shortcuts for users. For example, the keyboard shortcuts for Gmail allow the user to press the ⇧ key and u to mark a message as unread. Usually, shortcuts on websites include modifier keys, such as Ctrl, along with a letter, number, or punctuation symbol. Unfortunately, users who have dexterity challenges sometimes trigger those shortcuts by accident, and that can make a website impossible to use. Also, speech input technology can sometimes trigger those shortcuts. If your website offers single-character keyboard shortcuts, you must allow your user to turn off or remap those shortcuts. This doesn’t apply to single-character keyboard shortcuts that only work when a control, such as a drop-down list, has focus. 2.2.6 If your website uses a timeout for some process, you could store the user’s data for at least 20 hours, so that users with cognitive disabilities can take a break or take longer than usual to complete the process without losing their place or losing their data. Alternatively, you could warn the user, at the start of the process, that the website will time out after whatever amount of time you have chosen. 2.3.3 If your website has some non-essential animation (such as parallax scrolling) that starts when the user does some particular action, you could allow the user to turn off that animation so that you avoid harming users with vestibular disorders. The prefers-reduced-motion media query currently has limited browser support, but you can start using it now to avoid showing animations to users who select the “Reduce Motion” setting (or equivalent) in their device’s operating system: @media (prefers-reduced-motion: reduce) { .MrFancyPants { animation: none; } } 2.5.1 Some websites let users use multi-touch gestures on touchscreen devices. For example, Google Maps allows users to pinch with two fingers to zoom out and “unpinch” with two fingers to zoom in.
Also, some websites allow users to drag a finger to do some action, such as changing the value on an input element with type="range", or swiping sideways to the next photograph in a gallery. Some users with dexterity challenges, and some users who use a head pointer, an eye-gaze system, or speech-controlled mouse emulation, might find multi-touch gestures or dragging impossible. You must make sure that your website supports single-tap alternatives to any multi-touch gestures or dragging actions that it provides. For example, if your website lets someone pinch and unpinch a map to zoom in and out, you must also provide buttons that a user can tap to zoom in and out. 2.5.2 This might be my favourite accessibility criterion ever! Did you ever touch or press a “Send” button but then immediately realise that you really didn’t want to send the message, and so move your finger or cursor away from the “Send” button before lifting your finger?! Imagine how many arguments that functionality has prevented. 😌 You must make sure that touching or pressing does not cause anything to happen before the user raises their finger or cursor, or make sure that the user can move their finger or cursor away to prevent the action. In JavaScript, prefer onclick to onmousedown, unless your website has actions that need onmousedown. Also, this doesn’t apply to actions that need to happen as soon as the user clicks or touches. For example, a user playing a “Whac-A-Mole” game or a piano emulator needs the action to happen as soon as they click or touch the screen. 2.5.3 Recently, entrepreneur and social media guru Gary Vaynerchuk has emphasised the rise of audio and voice as output and input. He quotes a Google statistic that says one in five search queries use voice input. Once again, users with disabilities have been ahead of the curve here, having used screen readers and/or dictation software for many years. You must make sure that the text that appears on a form control or image matches how your HTML identifies that form control or image. Use proper semantic HTML to achieve this: use the label element to pair text with the corresponding input element; use an alt attribute value that exactly matches any text that appears in an image; use an aria-labelledby attribute value that exactly matches the text that appears in any complex component. 2.5.4 Modern Web APIs allow web developers to specify how their website will react to the user shaking, tilting, or gesturing towards their device. Some users might find those actions difficult, impossible, or embarrassing to perform. If you make any functionality available when the user shakes, tilts, or gestures towards their device, you must provide form controls that make that same functionality available. As usual, this doesn’t apply to websites that require shaking, tilting, or gesturing; this includes some games and music programmes. John Gruber describes the iPhone’s “Shake to Undo” gesture as “dreadful — impossible to discover through exploration of the on-screen [user interface], bad for accessibility, and risks your phone flying out of your hand”. This accessibility criterion seems to empathise with John: you must make sure that your user can prevent your website from responding to shaking, tilting and/or gesturing towards their device. 2.5.5 Homer Simpson’s telephone famously complained, “The fingers you have used to dial are too fat.” I think we’ve all felt like that when using phones and tablets, particularly when trying to dismiss pop-ups and ads. 
You could make interactive elements at least 44px wide × 44px high. Apple’s “Human Interface Guidelines” agree: “Provide ample touch targets for interactive elements. Try to maintain a minimum tappable area of 44pt x 44pt for all controls.” This doesn’t apply to links within inline text, or to unstyled elements. 2.5.6 Expect your users to use a variety of input devices, and to change from one to another whenever they please. For example, a user with a tablet and keyboard might jab icons on the screen while typing on the keyboard, or a user might dictate text while alone and then type on a keyboard when a colleague arrives. You could make sure that your website allows your users to use whichever available input modality they choose. Once again, this doesn’t apply to websites that require a specific modality; this includes typing tutors and music programmes. Can users understand your content? The third guideline has criteria that help you prevent your users from asking, “What the **** does this mean?” We’ve no new criteria for this guideline. Have you made your website robust enough to work on your users’ browsers and assistive technologies? The fourth and final guideline has criteria that help you prevent your users from asking, “Why the **** doesn’t this work on my device?” We’ve one new criterion for this guideline. 4.1.3 Sometimes you need to let your user know the status of something: “Did it work OK? What was the error? How far through it are we?” However, you should avoid making your user lose their place on the webpage, and so you should let them know the status without opening a new window, focusing on another element, or submitting a form. To do this properly for assistive technology users, choose the appropriate ARIA role for the new content; for example: if your user needs to know, “Did it work OK?”, add role="status"; if your user needs to know, “What was the error?”, add role="alert"; if your user needs to know, “How far through it are we?”, add role="log" (for a chat window) or role="progressbar" (for, well, a progress bar). (A small sketch of a status message follows at the end of this article.) Better design for humans My favourite of Luke Wroblewski’s collection of Design Quotes is, “Design is the art of gradually applying constraints until only one solution remains,” from that most prolific author, “Unknown”. I’ve always viewed the Web Content Accessibility Guidelines as people-based constraints, and liked how they help the design process. With these 17 new web content accessibility criteria, go forth and create solutions that more people than ever before can use. Spending those book vouchers you got for Christmas What next? If you’re looking for something to do to keep you busy this Christmas, I thoroughly recommend these four books for increasing your accessibility expertise: “Pro HTML5 Accessibility” by Joshue O Connor (Head of Accessibility (Interim) at the UK Government Digital Service, Director of InterAccess, and one of the editors of the Web Content Accessibility Guidelines 2.1): Although this book is six years old—a long time in web design—I find it an excellent go-to resource. It begins by explaining how people with disabilities use the web, and then expertly explains modern HTML in that context. “A Web for Everyone—Designing Accessible User Experiences” by Sarah Horton (the Paciello Group’s UX Strategy Lead) and Whitney Quesenbery (the Center for Civic Design’s co-director): This book covers the Web Content Accessibility Guidelines 2.0, the principles of Universal Design, and design thinking.
Its personas for Accessible UX and its profiles of well-known industry figures—including some 24ways authors—keep its content practical and relevant throughout. “Accessibility For Everyone” by Laura Kalbag (Ind.ie’s co-founder and designer, and 24ways author): This book is just over a year old, and so serves as a great resource for up-to-date coverage of guidelines, laws, and accessibility features of operating systems—as well as content, design, coding, and testing. The audiobook, which Laura narrates, can help you and your colleagues go from having little or no understanding of web accessibility, to becoming familiar with all aspects of web accessibility—in less than four hours. “Just Ask: Integrating Accessibility Throughout Design” by Shawn Lawton Henry (the World Wide Web Consortium (W3C)’s Web Accessibility Initiative (WAI)’s Outreach Coordinator): Although this book is 11½ years old, the way it presents accessibility as part of the User-Centered Design process is timeless. I found its section on Usability Testing with people with disabilities particularly useful. 2018 Alan Dalton alandalton 2018-12-03T00:00:00+00:00 https://24ways.org/2018/wcag-for-people-who-havent-read-the-update/ ux
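As a concrete illustration of success criterion 4.1.3 from the article above, here is a minimal sketch of a status message that screen readers announce in place, without a new window or a focus change. The id and the message are invented for the example:

<div role="status" id="save-status"></div>

<script>
  // role="status" is a polite live region: when its contents change,
  // assistive technology announces the new text automatically.
  document.getElementById("save-status").textContent = "Draft saved.";
</script>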
244 It’s Beginning to Look a Lot Like XSSmas I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional. It’s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it. With platforms like GitHub and npm, giving the gift of code is so easy it’s practically a no-brainer. This giant, open source Yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it’s also risky. Vulnerabilities and Open Source Open source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps. Graph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017 In the first 24 ways article this year, Katie referenced a few different types of vulnerability: Cross-site Request Forgery (also known as CSRF) SQL Injection Cross-site Scripting (also known as XSS) There are many more types of vulnerability, and those that live under the same category share similarities. For example, my favourite – is it weird to have a favourite vulnerability? – is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks – share it generously with your colleagues. Most vulnerabilities like this are not added intentionally – they’re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library. Others, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved on to other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, and it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer’s curiosity. Another tactic to get developers to unwittingly install malicious packages into their codebase is “typosquatting” – back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env). This is a big wake-up call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas. 
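To see quite how little separates a safe install from a dangerous one, compare the two commands below – the typosquatted crossenv package has since been removed from the npm registry:

npm install cross-env   # the genuine package
npm install crossenv    # the malicious lookalike, one hyphen away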
Santa’s on his sleigh If you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn’t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too. Let’s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that’s just the tip of the iceberg – have a look at the full dependency tree which contains 109 dependencies – more dependencies than there are Christmas puns in this article. Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies. Fixing vulnerabilities – the ultimate Christmas gift If you’re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy. When you find out about a new vulnerability, you have some options: Fix the vulnerability via an upgrade You can often fix a vulnerability by upgrading the library to the latest version. Make sure you’re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version. Patch the vulnerable code Sometimes, a fix for a vulnerable library isn’t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won’t change anything else. Switch to a different library If the library you’re using has no fix or patch, you may be better off switching it out for another one, particularly if it looks like it’s no longer being maintained. Responsibly disclosing vulnerabilities Knowing how to responsibly disclose vulnerabilities is something I’m ashamed to admit that I didn’t know about before I joined a security company. But it’s so important! On discovering a new vulnerability, a developer has a few options: A malicious developer will exploit that vulnerability for their own gain. A reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited. An ethical and aware developer will follow what’s known as a “responsible disclosure process”. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too. It’s important to understand this process if you’re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code. So what does responsible disclosure look like? 
I’ll take Node.js’s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There’s a separate process for bugs found in third-party npm packages). Once you’ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they’ll give you regular updates about the progress they’re making towards fixing it. As part of this, they’ll figure out which versions are affected, and prepare fixes for them. They’ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org. Tim Kadlec published an in-depth article about responsible disclosures if you’re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed. Encourage responsible disclosure Add a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability – 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn’t published one. Stats from the State of Open Source Security 2017 Bug bounties Some companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, a bounty also reduces the enticement to exploit a vulnerability rather than report it for a quick cash reward. HackerOne is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. WordPress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program. If you don’t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets offering a report about a critical vulnerability in exchange for money. On one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question. A gift worth giving It’s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool – I’m just a little biased towards Snyk, but as Katie mentioned, there’s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities. Stats from the State of Open Source Security 2017 Most importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too. 2018 Anna Debenham annadebenham 2018-12-17T00:00:00+00:00 https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/ code
243 Researching a Property in the CSS Specifications I frequently joke that I’m “reading the specs so you don’t have to”, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However, waiting for someone like me to write an article about something is a pretty slow way to get the information you need. Sometimes people like me get things wrong, or specifications change after we write a tutorial. What if you could just look it up yourself? That’s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs. Where are the CSS Specifications? The easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4. Who are the specifications for? CSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected. Which version of a spec should I look at? There are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (which is always the newest version) or at a date, which will be the date of that publication. If I’m referring to a particular Working Draft in an article I’ll typically link to the dated version. That way if the information changes it is possible for someone to see where I got the information from at the time of writing. If you want the very latest additions and changes to the spec, then the Editor’s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor’s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor’s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed. If you are especially keen on seeing updates to specifications keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated. How to approach a spec The first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. Therefore, if you are looking at a vast spec, know that you probably won’t need to read all the way to the bottom, or read every section in detail. The second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. 
There are some key specifications that many other specifications draw on, such as: Values and Units Intrinsic and Extrinsic Sizing Box Alignment When something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore. Researching your property As an example we will take a look at the property grid-auto-rows, which defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property. You might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties. Clicking on a property will take you to the TR version of the spec, the latest published draft, and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns. Value For value we can see that the property accepts a value <track-size>. The next thing to do is to find out what that actually means; clicking will take you to where it is defined in the Grid spec. The <track-size> value is defined as accepting various values: <track-breadth> minmax( <inflexible-breadth> , <track-breadth> ) fit-content( <length-percentage> ) We need to head down the rabbit hole to find out what each of these means. From here we essentially go down line by line until we have unpacked the value of track-size. <track-breadth> is defined just below <track-size> as: <length-percentage> <flex> min-content max-content auto So these are all things that would be valid to use as a value for grid-auto-rows. The first value of <length-percentage> is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a <length> in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values. grid-auto-rows: 100px; grid-auto-rows: 1em; grid-auto-rows: 30%; When using percentages, it is important to know what it is a percentage of, as a percentage has to resolve from something. There is text in the spec which explains how column and row percentages work. “<percentage> values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.” This means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid. The second value of <flex> is also defined here, as “A non-negative dimension with the unit fr specifying the track’s flex factor.” This is the fr unit, and the spec links to a fuller definition of fr as this unit is only used in Grid Layout so it is therefore defined in the grid spec. 
We now know that a valid value would be: grid-auto-rows: 1fr; There is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum. This means that 1fr is really minmax(auto, 1fr). This is why having a number of tracks all at 1fr does not mean that all are equal sized, as a larger item in any of the tracks would have a larger auto size and therefore would be larger after spare space had been distributed. We then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out. For example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary. We can now add min-content and max-content to our available values. grid-auto-rows: min-content; grid-auto-rows: max-content; The final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track will grow to fit the content used. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, that means these tracks stretch by default. Tracks using other types of length will not behave like this. grid-auto-rows: auto; So, this was the list for <track-breadth>; the next possible value is minmax( <inflexible-breadth> , <track-breadth> ). So this is telling us that we can use minmax() as a value, the final (max) value will be <track-breadth> and we have already unpacked all of the allowable values there. The first value (min) is detailed as an <inflexible-breadth>. If we look at the values for this, we discover that they are the same as <track-breadth>, minus the <flex> value: <length-percentage> min-content max-content auto We already know what all of these do, so we can add possible minmax() values to our list of values for <track-size>. grid-auto-rows: minmax(100px, 200px); grid-auto-rows: minmax(20%, 1fr); grid-auto-rows: minmax(1em, auto); grid-auto-rows: minmax(min-content, max-content); Finally we can use fit-content( <length-percentage> ). We can see that fit-content takes a value of <length-percentage> which we already know to be either a length unit, or a percentage. The spec details how fit-content is worked out, and it essentially allows a track which acts as if you had used the max-content keyword, however the track stops growing when it hits the length passed to it. 
grid-auto-rows: fit-content(200px); grid-auto-rows: fit-content(20%); Those are all of our possible values, and to round things off, look again at the initial <track-size> value: you can see it has a little + sign next to it. Click that and you will be taken to the CSS Values and Units module to find that, “A plus (+) indicates that the preceding type, word, or group occurs one or more times.” This means that we can pass a single track size to grid-auto-rows or multiple track sizes as a space separated list. Below the box is an explanation of what happens if you pass in more than one track size: “If multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.” Therefore with the following CSS, if five implicit rows were needed they would be as follows: 100px 1fr auto 100px 1fr .grid { display: grid; grid-auto-rows: 100px 1fr auto; } Initial We can now move to the next line in the box, and you’ll be glad to know that it isn’t going to require as much unpacking! This simply defines the initial value for grid-auto-rows. If you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that they will use if they are invoked as part of the usage of the specification they are in, but you do not set a value for them. In the case of grid-auto-rows it is used whenever rows are created in the implicit grid, so it needs to have a value to be used even if you do not set one. Applies to This line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers. Inherited This tells us if the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows it is not inherited. A property such as color is inherited, so you do not need to set it on each element. Percentages Are percentages allowed for this property, and if so, how are they calculated? In this case we are referred to the definition for grid-template-columns and grid-template-rows where we discover that the percentage is from the corresponding dimension of the content area. Media This defines the media group that the property belongs to. In this case visual. Computed Value This details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute. Canonical Order If you have a property–generally a shorthand property–which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property. In general you don’t need to worry about this value in the table. Animation Type This details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet! That’s (mostly) it! 
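To round off the research, here is a brief recap sketch of the kind of rules this definition allows – the class names are mine, but every value follows from the definitions unpacked above:

.listing {
  display: grid;
  /* one track size: each implicit row is at least 100px, growing to fit its content */
  grid-auto-rows: minmax(100px, auto);
}

.gallery {
  display: grid;
  /* multiple track sizes: the pattern repeats for as many implicit rows as are needed */
  grid-auto-rows: 100px 1fr auto;
}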
Sometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at as they will highlight usage of the property that the spec editor has felt could use an example. There may also be some additional text explaining anything specific to this property. In selecting grid-auto-rows I chose a fairly complex property in terms of the work we needed to do to unpack the value. Many properties are far simpler than this. However, ultimately, even when you come across a complex value, it really is just a case of stepping through the definitions until you come to the bottom of the rabbit hole. Being able to work out what is valid for each property is incredibly useful. It means you don’t waste time trying to use a value that doesn’t work for that property. You may also find that there are values you weren’t aware of, that solve problems for you. Further reading Specifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need. You may also find useful: Value Definition Syntax on MDN. The MDN Glossary defines many common terms. Understanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax. How to read W3C Specs - from 2001 but still relevant. I hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else can be very useful indeed. 2018 Rachel Andrew rachelandrew 2018-12-14T00:00:00+00:00 https://24ways.org/2018/researching-a-property-in-the-css-specifications/ code
242 Creating My First Chrome Extension Writing a Chrome Extension isn’t as scary as it seems! Not too long ago, I used a Chrome extension called 20 Cubed. I’m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops. Unfortunately, the developer stopped updating the extension and removed it from Chrome’s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same? Fortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the “old-fashioned” way. Sky’s the limit! Setup But first, some things you’ll need to know about before getting started: Callbacks Timeouts Chrome Dev Tools Developing with Chrome extension methods requires a lot of callbacks. If you’ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I’d highly recommend brushing up on that subject before getting started. Timeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn’t consider the fact that I’d be juggling three timers. And I probably would’ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit. On the note of organization, abstraction is important! You might have any combination of the following: The Chrome extension options page The popup from the Chrome Menu The windows or tabs you create The background scripts And that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you’ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs. Alright, now that you know all that up front, let’s get going! Documentation TL;DR READ THE DOCS. A few things to get started: Read Google’s primer on browser extensions Have a look at their Getting started tutorial Check out their overview on Chrome Extensions This overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension. The manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here’s what my manifest.json looked like, for example: https://github.com/jennz0r/eye-rest/blob/master/manifest.json Because I’m a visual learner, I found the images that describe the extension’s architecture most helpful. To clarify this diagram, the background.js file is the extension’s event handler. It’s constantly listening for browser events, which you’ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle. The Popup is the little window that appears when you click on an extension’s icon in the Chrome Menu. 
It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { "default_popup": FILE_NAME_HERE }. The Options page is exactly as it says. This displays customizable options, and is only visible to the user when they right-click on the extension’s icon in the Chrome menu and choose “Options”. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE. Content scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension. Debugging A quick note: don’t forget the debugging tutorial! Just like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you’re likely to have several inspector windows open – one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with. For example, I kept seeing the error “This request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.” Well, it turns out there are limitations on how often you can sync stored information. Another error I kept seeing was “Alarm delay is less than minimum of 1 minutes. In released .crx, alarm “ALARM_NAME_HERE” will fire in approximately 1 minutes”. Well, it turns out there are minimum interval times for alarms. Chrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help! Me adding a ton of `console.log`s while trying to debug my alarms. Eye Rest Functionality Ok, so what is the extension I created? Again, it’s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following: If the extension is running AND If the user has not clicked Pause in the Popup HTML AND If the counter in the Popup HTML is down to 00:00 THEN Open a new window with Timer HTML AND Start a 20 sec countdown in Timer HTML AND Reset the Popup HTML counter to 20:00 If the Timer HTML is down to 0 sec THEN Close that window. Rinse. Repeat. Sounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I could have created, I decided to make one that’s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn’t realize until I started coding. For visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it’s a pun.) API Now let’s discuss the APIs that I used to build this extension. Alarms What even are alarms? I didn’t know either. Alarms are basically Chrome’s setTimeout and setInterval. They exist because, as Google says… DOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant. For more information, check out this background migration doc. One interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn’t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded, and to clear any other alarms without that unique name. 
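As a sketch of that workaround, it could look something like the following – the alarm name and the 20-minute period here are my own illustrative choices rather than Eye Rest’s actual code, though the callback-based chrome.alarms calls (getAll, clear, and create) are real API methods:

// A sketch of the unique-alarm-name workaround described above.
var alarmName = 'eye-rest-' + Date.now(); // unique per extension load

chrome.alarms.getAll(function (alarms) {
  // Clear any alarms left over from previous loads or installs,
  // since clearAll proved unreliable.
  alarms.forEach(function (alarm) {
    if (alarm.name !== alarmName) {
      chrome.alarms.clear(alarm.name);
    }
  });
  // Create the recurring alarm; remember that released extensions
  // are held to a minimum interval of one minute.
  chrome.alarms.create(alarmName, { periodInMinutes: 20 });
});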
Background Scripts For Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file. I wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json. I also wanted my Popup script to do the same things when the user has unpaused the extension’s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file. Other APIs Windows I use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script. One day, while coding late into the evening, I found it very confusing that the windows.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window’s HTML. Until then, it hadn’t dawned on me that the url could be relative. Duh. I was tired! I pass timer.html as the url option, as well as type, size, position, and other visual options. Storage Maybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome’s storage is avoiding quotas and write operation maximums. I wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it’s quite complicated and requires lots of writes. That’s why I went with the user’s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused. Declarative Content The Declarative Content API allows you to show your extension’s page action based on several types of matches, without needing to take a host permission or inject a content script. So you’ll need this to get your extension to work in the browser! You can see me set this in my Background script. Because I want my extension’s popup to appear on every page one is browsing, I leave the page matchers empty. There are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available! The Extension Here’s what my original Popup looked like before I added styles. And here’s what it looks like with new styles. I guess I’m going for a Nickelodeon feel. And here’s the Timer window and Popup together! Publishing Publishing is a cinch. You just zip up your files, create a new or use an existing Google Developer account, upload the files, add some details, and pay a one-time $5 fee. That’s all! Then your extension will be available on the Chrome extension store! Neato :D My extension is now available for you to install. Conclusion I thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it’s definitely achievable with some time, persistence, and good ole Google searches. Eventually, I’d like to add more interactive elements to Eye Rest. 
For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup! Either way, with Eye Rest’s framework built out, I’m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes. 2018 Jennifer Wong jenniferwong 2018-12-05T00:00:00+00:00 https://24ways.org/2018/my-first-chrome-extension/ code
241 Jank-Free Image Loads There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then — oops! — an image pops in above it, pushing said text down the page, and our poor reader loses their place. By default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I’ll chart a path into a jank-free future – one in which it’s easy and natural to author <img> elements that load with their space reserved ahead of time. Jank is a very old problem, and there is a very old solution to it: the width and height attributes on <img>. The idea is: if we stick an image’s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives. width Specifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network. —The HTML 3.2 Specification, published on January 14 1997 Unfortunately for us, when width and height were first spec’d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird: See the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen. width and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths. I have good news, bad news, and great news. The good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them. The bad news is that these techniques are all terrible, cumbersome hacks. They’re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways. So, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform. aspect-ratio in CSS The first proposed feature? An aspect-ratio property in CSS! This would allow us to write CSS like this: img { width: 100%; } .thumb { aspect-ratio: 1/1; } .hero { aspect-ratio: 16/9; } This’ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above. Alas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It’s images – possibly user generated images – that can have any aspect ratio. The really tricky problem is unknown-when-you’re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles: <img src="image.jpg" style="aspect-ratio: 5/4" /> And inline styles give me the heebie-jeebies! 
As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style="" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I’m cutting off my (or anyone else’s) ability to change those aspect ratios, for whatever reason, later. How might we specify aspect ratios at a lower level? How might we give browsers information about an image’s dimensions, without giving them explicit instructions about how to style it? I’ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio! A brief note on intrinsic and extrinsic sizing What do I mean by “intrinsic” and “extrinsic?” The intrinsic size of an image is, put simply, how big it’d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800×600 image has an intrinsic width of 800px. The extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800×600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn’t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px. It surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity). CSS aspect-ratio lets us avoid setting extrinsic heights and widths – and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it. The last tool I’m going to talk about gets us out of the extrinsic sizing game all together — which, I think, is only appropriate for a feature that we’re going to be using in HTML. intrinsicsize in HTML The proposed intrinsicsize attribute will let you do this: <img src="image.jpg" intrinsicsize="800x600" /> That tells the browser, “hey, this image.jpg that I’m using here – I know you haven’t loaded it yet but I’m just going to let you know right away that it’s going to have an intrinsic size of 800×600.” This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image’s intrinsic size. You may ask (I did!): wait, what if my <img> references multiple resources, which all have different intrinsic sizes? Well, if you’re using srcset, intrinsicsize is a bit of a misnomer – what the attribute will do then, is specify an intrinsic aspect ratio: <img srcset="300x200.jpg 300w, 600x400.jpg 600w, 900x600.jpg 900w, 1200x800.jpg 1200w" sizes="75vw" intrinsicsize="3x2" /> In the future (and behind the “Experimental Web Platform Features” flag right now, in Chrome 71+), asking this image for its .naturalWidth will not return 3 – it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2. Can’t wait I seem to have gotten myself into the weeds a bit. Sizing on the web is complicated! 
Don’t let all of these details bury the big takeaway here: sometime soon (🤞 2019‽ 🤞), we’ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can’t wait! 2018 Eric Portis ericportis 2018-12-21T00:00:00+00:00 https://24ways.org/2018/jank-free-image-loads/ code
240 My CSS Wish List I love Christmas. I love walking around the streets of London, looking at the beautifully decorated windows, seeing the shiny lights that hang above Oxford Street and listening to Christmas songs. I’m not going to lie though. Not only do I like buying presents, I love receiving them too. I remember making long lists that I would send to Father Christmas with all of the Lego sets I wanted to get. I knew I could only get one a year, but I would spend days writing the perfect list. The years have gone by, but I still enjoy making wish lists. And I’ll tell you a little secret: my mum still asks me to send her my Christmas list every year. This time I’ve made my CSS wish list. As before, I’d be happy with just one present. Before I begin… … this list includes: things that don’t exist in the CSS specification (if they do, please let me know in the comments – I may have missed them); others that are in the spec, but it’s incomplete or lacks use cases and examples (which usually means that properties haven’t been implemented by even the most recent browsers). Like with any other wish list, the further down I go, the more unrealistic my expectations – but that doesn’t mean I can’t wish. Some of the things we wouldn’t have thought possible a few years ago have been implemented and our wishes fulfilled (think multiple backgrounds, gradients and transformations, for example). The list Cross-browser implementation of font-size-adjust When one of the fall-back fonts from your font stack is used, rather than the preferred (first) one, you can retain the aspect ratio by using this very useful property. It is incredibly helpful when the fall-back fonts are smaller or larger than the initial one, which can make layouts look less polished. What font-size-adjust does is divide the original font-size of the fall-back fonts by the font-size-adjust value. This preserves the x-height of the preferred font in the fall-back fonts. Here’s a simple example: p { font-family: Calibri, "Lucida Sans", Verdana, sans-serif; font-size-adjust: 0.47; } In this case, if the user doesn’t have Calibri installed, both Lucida Sans and Verdana will keep Calibri’s aspect ratio, based on the font’s x-height. This property is a personal favourite and one I keep pointing to. Firefox supported this property from version three. So far, it’s the only browser that does. Fontdeck provides the font-size-adjust value along with its fonts, and has a handy tool for calculating it. More control over overflowing text The text-overflow property lets you control text that overflows its container. The most common use for it is to show an ellipsis to indicate that there is more text than what is shown. To be able to use it, the container should have overflow set to something other than visible, and white-space: nowrap: div { white-space: nowrap; width: 100%; overflow: hidden; text-overflow: ellipsis; } This, however, only works for blocks of text on a single line. In the wish list of many CSS authors (and in mine) is a way of defining text-overflow: ellipsis on a block of multiple text lines. Opera has taken the first step and added support for the -o-ellipsis-lastline property, which can be used instead of ellipsis. This property is not part of the CSS3 spec, but we could certainly make good use of it if it were… WebKit has -webkit-line-clamp to specify how many lines to show before cutting with an ellipsis, but support is patchy at best and there is no control over where the ellipsis shows in the text. 
Many people have spent time wrangling JavaScript to do this for us, but the methods used are very processor intensive, and introduce a JavaScript dependency. Indentation and hanging punctuation properties You might notice a trend here: almost half of the items in this list relate to typography. The lack of fine-grained control over typographical detail is a general concern among designers and CSS authors. Indentation and hanging punctuation fall into this category. The CSS3 specification introduces two new possible values for the text-indent property: each-line; and hanging. each-line would indent the first line of the block container and each line after a forced line break; hanging would invert which lines are affected by the indentation. The proposed hanging-punctuation property would allow us to specify whether opening and closing brackets and quotes should hang outside the edge of the first and last lines. The specification is still incomplete, though, and asks for more examples and use cases. Text alignment and hyphenation properties Following the typographic trend of this list, I’d like to add better control over text alignment and hyphenation properties. The CSS3 module on Generated Content for Paged Media already specifies five new hyphenation-related properties (namely: hyphenate-dictionary; hyphenate-before and hyphenate-after; hyphenate-lines; and hyphenate-character), but it is still being developed and lacks examples. In the text alignment realm, the new text-align-last property allows you to define how the last line of a block (or a line just before a forced break) is aligned, if your text is set to justify. Its value can be: start; end; left; right; center; and justify. The text-justify property should also allow you to have more control over text set to text-align: justify but, for now, only Internet Explorer supports this. calc() This is probably my favourite item in the list: the calc() function. This function is part of the CSS3 Values and Units module, but it has only been implemented by Firefox (4.0). To take advantage of it now you need to use the Mozilla vendor code, -moz-calc(). Imagine you have a fluid two-column layout where the sidebar column has a fixed width of 240 pixels, and the main content area fills the rest of the width available. This is how you could create that using -moz-calc(): #main { width: -moz-calc(100% - 240px); } Can you imagine how many hacks and headaches we could avoid were this function available in more browsers? Transitions and animations are really nice and lovely but, for me, it’s the ability to do the things that calc() allows you to that deserves the spotlight and to be pushed for implementation. Selector grouping with -moz-any() The -moz-any() selector grouping has been introduced by Mozilla but it’s not part of any CSS specification (yet?); it’s currently only available on Firefox 4. This would be especially useful with the way HTML5 outlines documents, where we can have any number of variations of several levels of headings within numerous types of containers (think sections within articles within sections…). Here is a quick example (copied from the Mozilla blog post about the article) of how -moz-any() works. 
Instead of writing: section section h1, section article h1, section aside h1, section nav h1, article section h1, article article h1, article aside h1, article nav h1, aside section h1, aside article h1, aside aside h1, aside nav h1, nav section h1, nav article h1, nav aside h1, nav nav h1 { font-size: 24px; } You could simply write: -moz-any(section, article, aside, nav) -moz-any(section, article, aside, nav) h1 { font-size: 24px; } Nice, huh? More control over styling form elements Some are of the opinion that form elements shouldn’t be styled at all, since a user might not recognise them as such if they don’t match the operating system’s controls. I partially agree: I’d rather put the choice in the hands of designers and expect them to be capable of deciding whether their particular design hampers or improves usability. I would say the same idea applies to font-face: while some fear designers might go crazy and litter their web pages with dozens of different fonts, most welcome the freedom to use something other than Arial or Verdana. There will always be someone who will take this freedom too far, but it would be useful if we could, for example, style the default Opera date picker: <input type="date" /> or Safari’s slider control (think star movie ratings, for example): <input type="range" min="0" max="5" step="1" value="3" /> Parent selector I don’t think there is one CSS author out there who has never come across a case where he or she wished there was a parent selector. There have been many suggestions as to how this could work, but a variation of the child selector is usually the most popular: article < h1 { … } One can dream… Flexible box layout The Flexible Box Layout Module sounds a bit like magic: it introduces a new box model to CSS, allowing you to distribute and order boxes inside other boxes, and determine how the available space is shared. Two of my favourite features of this new box model are: the ability to redistribute boxes in a different order from the markup the ability to create flexible layouts, where boxes shrink (or expand) to fill the available space Let’s take a quick look at the second case. Imagine you have a three-column layout, where the first column takes up twice as much horizontal space as the other two: <body> <section id="main"> </section> <section id="links"> </section> <aside> </aside> </body> With the flexible box model, you could specify it like this: body { display: box; box-orient: horizontal; } #main { box-flex: 2; } #links { box-flex: 1; } aside { box-flex: 1; } If you decide to add a fourth column to this layout, there is no need to recalculate units or percentages, it’s as easy as that (there’s a short sketch of this at the end of this section). Browser support for this property is still in its early stages (Firefox and WebKit need their vendor prefixes), but we should start to see it being gradually introduced as more attention is drawn to it (I’m looking at you…). You can read a more comprehensive write-up about this property on the Mozilla developer blog. It’s easy to understand why it’s harder to start playing with this module than with things like animations or other more decorative properties, which don’t really break your layouts when users don’t see them. But it’s important that we do, even if only in very experimental projects. 
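Here’s that fourth-column sketch mentioned above – the id is mine, and the vendor prefixes are again omitted for brevity:

<section id="extra"> </section>

#extra { box-flex: 1; }

The existing columns simply shrink to make room for the new one; no units or percentages need recalculating.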
Nested selectors Anyone who has never wished they could do something like the following in CSS, cast the first stone: article { h1 { font-size: 1.2em; } ul { margin-bottom: 1.2em; } } Even though it can easily turn into a specificity nightmare and promote redundancy in your style sheets (if you abuse it), it’s easy to see how nested selectors could be useful. CSS compilers such as Less or Sass let you do this already, but not everyone wants or can use these compilers in their projects. Every wish list has an item that could easily be dropped. In my case, I would say this is one that I would ditch first – it’s the least useful, and also the one that could cause more maintenance problems. But it could be nice. Implementation of the ::marker pseudo-element The CSS Lists module introduces the ::marker pseudo-element, that allows you to create custom list item markers. When an element’s display property is set to list-item, this pseudo-element is created. Using the ::marker pseudo-element you could create something like the following: Footnote 1: Both John Locke and his father, Anthony Cooper, are named after 17th- and 18th-century English philosophers; the real Anthony Cooper was educated as a boy by the real John Locke. Footnote 2: Parts of the plane were used as percussion instruments and can be heard in the soundtrack. where the footnote marker is generated by the following CSS: li::marker { content: "Footnote " counter(notes) ":"; text-align: left; width: 12em; } li { counter-increment: notes; } You can read more about how to use counters in CSS in my article from last year. Bear in mind that the CSS Lists module is still a Working Draft and is listed as “Low priority”. I did say this wish list would start to grow more unrealistic closer to the end… Variables The sight of the word ‘variables’ may make some web designers shy away, but when you think of them applied to things such as repeated colours in your stylesheets, it’s easy to see how having variables available in CSS could be useful. Think of a website where the main brand colour is applied to elements like the main text, headings, section backgrounds, borders, and so on. In a particularly large website, where the colour is repeated countless times in the CSS and where it’s important to keep the colour consistent, using variables would be ideal (some big websites are already doing this by using server-side technology). Again, Less and Sass allow you to use variables in your CSS but, again, not everyone can (or wants to) use these. If you are using Less, you could, for instance, set the font-family value in one variable, and simply call that variable later in the code, instead of repeating the complete font stack, like so: @fontFamily: Calibri, "Lucida Grande", "Lucida Sans Unicode", Helvetica, Arial, sans-serif; body { font-family: @fontFamily; } Other features of these CSS compilers might also be useful, like the ability to ‘call’ a property value from another selector (accessors): header { background: #000000; } footer { background: header['background']; } or the ability to define functions (with arguments), saving you from writing large blocks of code when you need to write something like, for example, a CSS gradient: .gradient (@start:"", @end:"") { background: -webkit-gradient(linear, left top, left bottom, from(@start), to(@end)); background: -moz-linear-gradient(-90deg,@start,@end); } button { .gradient(#D0D0D0,#9F9F9F); } Standardised comments Each CSS author has his or her own style for commenting their style sheets. 
Standardised comments Each CSS author has his or her own style for commenting their style sheets. While this isn’t a massive problem on smaller projects, where maybe only one person will edit the CSS, in larger scale projects, where dozens of hands touch the code, it would be nice to start seeing a more standardised way of commenting. One attempt at creating a standard for CSS comments is CSSDOC, an adaptation of Javadoc (a documentation generator that extracts comments from Java source code into HTML). CSSDOC uses ‘DocBlocks’, a term borrowed from the phpDocumentor Project. A DocBlock is a human- and machine-readable block of data which has the following structure: /** * Short description * * Long description (this can have multiple lines and contain <p> tags) * * @tags (optional) */ CSSDOC includes a standard for documenting bug fixes and hacks, colours, versioning and copyright information, amongst other important bits of data. I know this isn’t a CSS feature request per se; rather, it’s just me pointing you at something that is usually overlooked but that could contribute towards keeping style sheets easier to maintain and to hand over to new developers.
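To make that concrete, a style sheet header in the CSSDOC mould might look something like this – treat the tag names as indicative only, since the CSSDOC drafts define the canonical vocabulary:

/**
 * Global styles for example.com
 *
 * Base typography, colour and layout rules.
 *
 * @copyright 2010 Example Org
 * @version   0.3
 * @colordef  #e63d2f; brand red
 */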
Final notes I understand that if even some of these were implemented in browsers now, it would be a long time until all vendors were up to speed. But if we don’t talk about them and experiment with what’s available, then it will definitely never happen. Why haven’t I mentioned better browser support for existing CSS3 properties? Because that would be the same as adding chocolate to your Christmas wish list – you don’t need to ask, everyone knows you want it. The list could go on. There are dozens of other things I would love to see integrated in CSS or further developed. These are my personal favourites: some might be less useful than others, but I’ve wished for all of them at some point. Part of the research I did while writing this article was asking some friends what they would add to their lists; other than a couple of items I already had in mine, everything else was different. I’m sure your list would be different too. So tell me, what’s on your CSS wish list? 2010 Inayaili de León Persson inayailideleon 2010-12-03T00:00:00+00:00 https://24ways.org/2010/my-css-wish-list/ code 239 Using the WebFont Loader to Make Browsers Behave the Same Web fonts give us designers a whole new typographic palette with which to work. However, browsers handle the loading of web fonts in different ways, and this can lead to inconsistent user experiences. Safari, Chrome and Internet Explorer leave a blank space in place of the styled text while the web font is loading. Opera and Firefox show text with the default font, which switches over when the web font has loaded, resulting in the so-called Flash of Unstyled Text (aka FOUT). Some people prefer Safari’s approach as it eliminates FOUT; others think the Firefox way is more appropriate, as content can be read whilst fonts download. Whatever your preference, the WebFont Loader can make all browsers behave the same way. The WebFont Loader is a JavaScript library that gives you extra control over font loading. It was co-developed by Google and Typekit, and released as open source. The WebFont Loader works with most web font services as well as with self-hosted fonts. The WebFont Loader tells you when the following events happen as a browser downloads web fonts (or loads them from cache): when fonts start to download (‘loading’); when fonts finish loading (‘active’); and if fonts fail to load (‘inactive’). If your web page requires more than one font, the WebFont Loader will trigger events for individual fonts, and for all the fonts as a whole. This means you can find out when any single font has loaded, and when all the fonts have loaded (or failed to do so). The WebFont Loader notifies you of these events in two ways: by applying special CSS classes when each event happens; and by firing JavaScript events. For our purposes, we’ll be using just the CSS classes. Implementing the WebFont Loader As stated above, the WebFont Loader works with most web font services as well as with self-hosted fonts. Self-hosted fonts To use the WebFont Loader when you are hosting the font files on your own server, paste the following code into your web page: <script type="text/javascript"> WebFontConfig = { custom: { families: ['Font Family Name', 'Another Font Family'], urls: [ 'http://yourwebsite.com/styles.css' ] } }; (function() { var wf = document.createElement('script'); wf.src = ('https:' == document.location.protocol ? 'https' : 'http') + '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js'; wf.type = 'text/javascript'; wf.async = 'true'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(wf, s); })(); </script> Replace Font Family Name and Another Font Family with a comma-separated list of the font families you want to check against, and replace http://yourwebsite.com/styles.css with the URL of the style sheet where your @font-face rules reside. Fontdeck Assuming you have added some fonts to a website project in Fontdeck, use the aforementioned code for self-hosted solutions and replace http://yourwebsite.com/styles.css with the URL of the <link> tag in your Fontdeck website settings page. It will look something like http://f.fontdeck.com/s/css/xxxx/domain/nnnn.css. Typekit Typekit’s JavaScript-based implementation incorporates the WebFont Loader events by default, so you won’t need to include any WebFont Loader code.
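Google The loader also ships with a module for Google’s own hosted font service. A minimal sketch, with placeholder family names, might look like this (the loading of webfont.js itself is unchanged from the self-hosted snippet above):

<script type="text/javascript">
  WebFontConfig = {
    google: { families: ['Droid Sans', 'Droid Serif'] } // placeholder families
  };
  // then load webfont.js exactly as in the self-hosted snippet above
</script>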
Making all browsers behave like Safari To make Firefox and Opera work in the same way as WebKit browsers (Safari, Chrome, etc.) and Internet Explorer, and thus minimise FOUT, you need to hide the text while the fonts are loading. While fonts are loading, the WebFont Loader adds a class of wf-loading to the <html> element. Once the fonts have loaded, the wf-loading class is removed and replaced with a class of wf-active (or wf-inactive if all of the fonts failed to load). This means you can style elements on the page while the fonts are loading and then style them differently when the fonts have finished loading. So, let’s say the text you need to hide while fonts are loading is contained in all paragraphs and top-level headings. By writing the following style rule into your CSS, you can hide the text while the fonts are loading: .wf-loading h1, .wf-loading p { visibility:hidden; } Because the wf-loading class is removed once the fonts have loaded, the visibility:hidden rule will stop being applied, and the text revealed. You can see this in action on this simple example page. That works nicely across the board, but the situation is slightly more complicated. WebKit doesn’t wait for all fonts to load before displaying text: it displays text elements as soon as the relevant font is loaded. To emulate WebKit more accurately, we need to know when individual fonts have loaded, and apply styles accordingly. Fortunately, as mentioned earlier, the WebFont Loader has events for individual fonts too. When a specific font is loading, a class of the form wf-fontfamilyname-n4-loading is applied. Assuming headings and paragraphs are styled in different fonts, we can make our CSS more specific as follows: .wf-fontfamilyname-n4-loading h1, .wf-anotherfontfamily-n4-loading p { visibility:hidden; } Note that the font family name is transformed to lower case, with all spaces removed. The n4 is a shorthand for the weight and style of the font family. In most circumstances you’ll use n4, but refer to the WebFont Loader documentation for exceptions. You can see it in action on this Safari example page (you’ll probably need to disable your cache to see any change occur). Making all browsers behave like Firefox To make WebKit browsers and Internet Explorer work like Firefox and Opera, you need to explicitly show text while the fonts are loading. In order to make this happen, you need to specify a font family which is not a web font while the fonts load, like this: .wf-fontfamilyname-n4-loading h1 { font-family: 'arial narrow', sans-serif; } .wf-anotherfontfamily-n4-loading p { font-family: arial, sans-serif; } You can see this in action on the Firefox example page (again you’ll probably need to disable your cache to see any change occur). And there’s more That’s just the start of what can be done with the WebFont Loader. More areas to explore would be tweaking font sizes to reduce the impact of reflowing text and to better cater for very narrow fonts. By using the JavaScript events much more can be achieved too, such as fading in text as the fonts load.
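In fact, a rough fade can be sketched with the CSS classes alone, since wf-loading is swapped for wf-active (or wf-inactive) when loading completes. Something along these lines – a sketch that assumes CSS transition support, so browsers without it simply get an instant reveal:

.wf-loading h1, .wf-loading p { opacity: 0; }
.wf-active h1, .wf-active p,
.wf-inactive h1, .wf-inactive p {
  opacity: 1;
  -webkit-transition: opacity 0.5s ease-in; /* vendor prefixes for the transition */
  -moz-transition: opacity 0.5s ease-in;
  -o-transition: opacity 0.5s ease-in;
  transition: opacity 0.5s ease-in;
}

The JavaScript events would give you finer control – fading each font’s elements individually, say – but the class-based version is a cheap place to start.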
2010 Richard Rutter richardrutter 2010-12-02T00:00:00+00:00 https://24ways.org/2010/using-the-webfont-loader-to-make-browsers-behave-the-same/ code 238 Everything You Wanted To Know About Gradients (And a Few Things You Didn’t) Hello. I am here to discuss CSS3 gradients. Because, let’s face it, what the web really needed was more gradients. Still, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer’s vocabulary. There’s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work. Of course, that whole ‘proper application’ thing is the tricky bit. But given their place in our toolkit and their prominence online, it really is heartening to see we can create gradients directly with CSS. They’re part of the draft images module, and implemented in two of the major rendering engines. Still, I’ve always found CSS gradients to be one of the more confusing aspects of CSS3. So if you’ll indulge me, let’s take a quick look at how to create CSS gradients—hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser. Gradient theory 101 (I hope that’s not really a thing) Right. So before we dive into the code, let’s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here’s a straightforward one: I spent seconds (OK, hours) designing this gradient. I hope you like it. At either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different: (Don’t ever really do this. Please. I beg you.) It’s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way. Now, color stops alone do not a gradient make. Between each is a transition point, the fail-over point between the two stops. Now, the transition point doesn’t need to fall exactly between stops: it can be brought closer to one stop or the other, influencing the overall shape of the gradient. A tale of two syntaxes Armed with our new vocabulary, let’s look at a CSS gradient in the wild. Behold, the simple input button: There’s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here’s the CSS that makes it happen: input[type=submit] { background-color: #F47A20; background-image: -moz-linear-gradient( #FAA51A, #F47A20 ); background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(0, #FAA51A), color-stop(1, #F47A20) ); } I’ve borrowed David DeSandro’s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that’s perfectly understandable—heck, it sort of turned mine. But let’s step through the CSS slowly, and see if we can’t make it a little less terrifying. Verbose WebKit is verbose Here’s the syntax for our little gradient on WebKit: background-image: -webkit-gradient(linear, 0 0, 0 100%, color-stop(0, #FAA51A), color-stop(1, #F47A20) ); Woof. Quite a mouthful, no?
Well, here’s what we’re looking at: WebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients. The next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). Linear gradients are simply drawn along the path between those two points, which allows us to change the direction of our gradient simply by altering its start and end points. Afterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop’s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself. For a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us: background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#FAA51A), to(#F47A20) ); from(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#F47A20) is the same as color-stop(1, #F47A20) or color-stop(100%, #F47A20)—in both cases, we’re simply declaring the first and last color stops in our gradient. Terse Gecko is terse WebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3. Naturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented. Here’s how we get gradient-y in Gecko: background-image: -moz-linear-gradient( #FAA51A, #F47A20 ); Wait, what? Done already? That’s right. By default, -moz-linear-gradient assumes you’re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that’s the case, then you simply need to specify your color stops, delimited with a few commas. I know: that was almost… painless. But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them. We can specify an origin point for our gradient: background-image: -moz-linear-gradient(50% 100%, #FAA51A, #F47A20 ); As well as an angle, to give it a direction: background-image: -moz-linear-gradient(50% 100%, 45deg, #FAA51A, #F47A20 ); And we can specify multiple stops, simply by adding to our comma-delimited list: background-image: -moz-linear-gradient(50% 100%, 45deg, #FAA51A, #FCC, #F47A20 ); By adding a percentage after a given color value, we can determine its position along the gradient path: background-image: -moz-linear-gradient(50% 100%, 45deg, #FAA51A, #FCC 20%, #F47A20 ); So that’s some of the flexibility implicit in the W3C/Mozilla-style syntax. Now, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit’s more verbose approach to the, well, looseness behind the -moz syntax. À chacun son gradient syntax. Still, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same. Reusing color stops for fun and profit But CSS gradients aren’t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects.
Tim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it’s the feature image that really catches the eye. You see, there’s nothing that says you can’t reuse color stops. And Tim’s exploited that perfectly. He’s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he’s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo. But then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss. And his final color stop ends at the same fully transparent white, completing the effect. Hot? I do believe so.
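If you fancy experimenting with the idea, a rough approximation of that stop-stacking in the Mozilla syntax might read as follows – my reconstruction from the description above, not Tim’s actual code, and the selector and angle are guesses:

.feature {
  background-image: -moz-linear-gradient(315deg,
    rgba(255, 255, 255, 0),        /* fully transparent at the top left */
    rgba(255, 255, 255, 0.1) 50%,  /* gradual brightening to the halfway mark */
    rgba(255, 255, 255, 0) 50%,    /* second stop at the same position: the hard edge */
    rgba(255, 255, 255, 0)         /* and back out to fully transparent */
  );
}

The whole trick lives in those two stops sharing the 50% position.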
Rocking the radials We’ve been looking at linear gradients pretty exclusively. But I’d be remiss if I didn’t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar: And here’s the relevant CSS: background: -moz-radial-gradient(50% 100%, farthest-side, rgb(204, 255, 255) 1%, rgb(85, 85, 85) 15%, rgba(85, 85, 85, 0) ); background: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15, from(rgb(204, 255, 255)), to(rgba(85, 85, 85, 0)) ); Now, the syntax builds on what we’ve already learned about linear gradients, so much of it might be familiar to you, picking out color stops and transition points, as well as the two syntaxes’ reliance on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, …)) to shift into circular mode. Mozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There’s also a size constant defined (farthest-side), which determines the reach and shape of our gradient. WebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each also accepts a radius in pixels, allowing you to control the skew and breadth of the circle. Again, this is a fairly modest little radial gradient. Time and article length (and, let’s be honest, your author’s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can’t recommend the references at Mozilla and Apple strongly enough. Leave no browser behind But no matter the kind of gradients you’re working with, there is a large swathe of browsers that simply don’t support gradients. Thankfully, it’s fairly easy to declare a sensible fallback—it just depends on the kind of fallback you’d like. Essentially, gradient-blind browsers will disregard any properties containing references to either -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties. For example: if you’d like to fall back to a flat color, simply declare a separate background-color: .nav { background-color: #000; background-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45)); background-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45))); } Or perhaps just create three separate background properties: .nav { background: #000; background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45)); background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45))); } We can even build on this to fall back to a non-gradient image: .nav { background: #000 url("faux-gradient-lol.png") repeat-x; background: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45)); background: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45))); } No matter which approach you feel is most appropriate to your design, it’s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings. (If you’re feeling especially masochistic, there’s even a way to get simple linear gradients working in IE via Microsoft’s proprietary filters. Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I’d recommend avoiding those. And don’t tell Andy Clarke I told you, or he’ll probably unload his Derringer at me. Or something.) Go forth and, um, gradientify! It’s entirely possible your head’s spinning. Heck, mine is, but that might be the effects of the ’nog. But maybe you’re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine. Well, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they’re easily modifiable by tweaking a few CSS properties. Because, let’s face it, less time spent yelling at Photoshop is a very, very good thing. Of course, CSS-generated gradients are not without their drawbacks. The syntax can be confusing, and it’s still under development at the W3C. As we’ve seen, browser support is still very much in flux. And it’s possible that gradients themselves have some real performance drawbacks—so test thoroughly, and gradient carefully. But still, as syntaxes converge, and support improves, I think generated gradients can make a compelling tool in our collective belts. The tasteful design is, of course, entirely up to you. So have fun, and get gradientin’. 2010 Ethan Marcotte ethanmarcotte 2010-12-22T00:00:00+00:00 https://24ways.org/2010/everything-you-wanted-to-know-about-gradients/ code
237 Circles of Confusion Long before I worked on the web, I specialised in training photographers how to use large format, 5×4″ and 10×8″ view cameras – film cameras with swing and tilt movements and bellows, their upside down, back to front images viewed on dim, ground glass screens. It’s been fifteen years since I clicked a shutter on a view camera, but some things have stayed with me from those years. In photography, even the best lenses don’t focus light onto a point (infinitely small in size) but onto ‘spots’ or circles in the ‘film/image plane’. These circles of light have dimensions, despite being microscopically small. They’re known as ‘circles of confusion’. The larger the circles of light become, the more unsharp parts of a photograph appear. On the flip side, when circles are smaller, an image looks sharper and more in focus. This is the basis for photographic depth of field, and with that comes the knowledge that no photograph can be perfectly focused, never truly sharp. Instead, photographs can only be ‘acceptably unsharp’. Acceptable unsharpness is now a concept that’s relevant to the work we make for the web, because often – unless we compromise – websites cannot look or be experienced exactly the same across browsers, devices or platforms. Accepting that fact, and learning to look upon these natural differences as creative opportunities instead of imperfections, can be tough. Deciding which aspects of a design must remain consistent and, therefore, possibly require more time, effort or compromises can be tougher. Circles of confusion can help us, our bosses and our customers make better, more informed decisions. Acceptable unsharpness Many clients still demand that every aspect of a design should be ‘sharp’ – that every user must see rounded boxes, gradients and shadows – without regard for the implications. I believe that this stems largely from the fact that they have previously been shown designs – and asked for sign-off – using static images. It’s also true that in the past, organisations have invested heavily in style guides which, while maybe still useful in offline media, have a strictness that often fails to allow for the flexibility that we need to create experiences that are appropriate to a user’s browser or device capabilities. We live in an era where web browsers and devices have wide-ranging capabilities, and websites can rarely look or be experienced exactly the same across them. Is a particular typeface vital to a user’s experience of a brand? How important are gradients or shadows? Are rounded corners really that necessary? These decisions determine how ‘sharp’ an element should be across browsers with different capabilities and, therefore, how much time, effort or extra code and images we devote to achieving consistency between them. To help our clients make those decisions, we can use circles of confusion. Circles of confusion Using circles of confusion involves plotting aspects of a visual design into a series of concentric circles, starting at the centre with elements that demand the most consistency. Then, work outwards, placing elements in order of their priority so that they become progressively ‘softer’, more defocused as they’re plotted into outer rings. If layout and typography must remain consistent, place them in the centre circle as they’re aspects of a design that must remain ‘sharp’. When gradients are important – but not vital – to a user’s experience of a brand, plot them close to, but not in, the centre.
This makes everyone aware that to achieve consistency, you’ll need to carve out extra images for browsers that don’t support CSS gradients. If achieving rounded corners or shadows in all browsers isn’t important, place them into outer circles, allowing you to save time by not creating images or employing JavaScript workarounds. I’ve found plotting aspects of a visual design into circles of confusion is a useful technique when explaining the natural differences between browsers to clients. It sets more realistic expectations and creates an environment for more meaningful discussions about progressive and emerging technologies. Best of all, it enables everyone to make better and more informed decisions about design implementation priorities. Involving clients makes the implications of the decisions they make more transparent. For me, this has sometimes meant shifting deadlines, or it has allowed me to more easily justify an increase in fees. Most important of all, circles of confusion have helped the people that I work with move beyond yesterday’s one-size-fits-all thinking about visual design, towards accepting the rich diversity of today’s web. 2010 Andy Clarke andyclarke 2010-12-23T00:00:00+00:00 https://24ways.org/2010/circles-of-confusion/ process
236 Extreme Design Recently, I set out with twelve other designers and developers for a 19th century fortress on the Channel Island of Alderney. We were going to /dev/fort, a sort of band camp for geeks. Our cohort’s mission: to think up, build and finish something – without readily available internet access. Alderney runway, photo by Chris Govias Wait, no internet? Well, pretty much. As the creators of /dev/fort, James Aylett and Mark Norman Francis, put it: “Imagine a place with no distractions – no IM, no Twitter”. But also no way to quickly look up a design pattern, code sample or source material. Like packing for camping, /dev/fort means bringing everything you’ll need on your back or your hard drive: from long johns to your favourite icon set. We got to work the first night discussing ideas for what we wanted to build. By the time breakfast was cleared up the next morning, we’d settled on Russ’s idea to make the Apollo 13 (PDF) transcript accessible. Days two and three were spent collaboratively planning (KJ style) what features we wanted to build, and unravelling the larger UX challenges of the project. The next five days were spent building it. Within 36 hours of touchdown at Southampton Airport, we launched our creation: spacelog.org. The weather was cold, the coal fire less than ideal, food and supplies a hike away, and the process lightning-fast. A week of designing under extreme circumstances called for an extreme process. Some of this was driven by James’s and Norm’s experience running these things, but a lot of it materialised while we were there – especially for our three-strong design team (myself, Gavin O’Carroll and Chris Govias) who, though we knew each other, had never worked together as a group in this kind of scenario before. The outcome was a pretty spectacular process, with some key takeaways useful for any small group trying to build something quickly. What it’s like inside the fort /dev/fort has the pressure and pace of a hack day without being a hack day – primarily, no workshops or interruptions, but also a different mentality. While hack days are typically developer-driven with a ‘hack first, design later (if at all)’ attitude, James was quick to tell the team to hold off from writing any code until we had a plan. This put a healthy pressure on the design and product folks to slash through the UX problems before we started building. While the fort definitely had more of a hack day feel, all of us were familiar with Agile methods, so we borrowed a few useful techniques such as morning stand-ups and an emphasis on teamwork. We cut some really good features to make our launch date, and chunked the work based on user goals, iterating as we went. What made this design process work? A golden ratio of teams My personal experience, both professionally and in free-form situations like this, is that the tendency is to get (or hire) a designer – singular. Leaders of businesses, founders of start-ups, organisers of events: one designer is not enough! Finding one ace-blooded designer who can ‘do everything’ will always result in bottlenecks and burnout. Like the nuances between different development languages, design is a multifaceted discipline, and very few can claim to be equally strong in every aspect. Overlap in skill set will result in a stronger, more robust interface. More importantly, however, having lots of designers to go around meant that we all had the opportunity to pair with developers, polishing the details that don’t usually get polished.
As soon as we launched, the public reception of the design and UX was overwhelmingly positive (proof!). But also, a lot of people asked us who the designer was, attributing it to one person. While it’s important to note that everyone in our team was multitalented (and could easily shift between roles, helping us all stay unblocked), the golden ratio James and Norm devised was two product/developer folks, three interaction designers and eight developers. photo by Ben Firshman Equality inside the fortress walls Something magical about the fort is how everyone leaves the outside world on the drawbridge. Job titles, professional status, Twitter followers, and so on. Like scout camp, a mutual respect and trust is expected of all the participants. Like extreme programming, extreme design requires us all to be equal partners in a collaborative team. I think this is especially worth noting for designers; our past is filled with the clear hierarchy of the traditional studio system which, however important for taste and style, seems less compatible with modern web/software development methods. Being equal doesn’t mean being the same, however. We established clear roles and teams for ourselves on the second day, deferring to the relevant person when a decision needed to be made. As the interface coalesced, the designers and developers took ownership over certain parts to ensure the details got looked after, while staying open to ideas and revisions from the rest of the cohort. Create a space where everyone who enters is equal, but be sure to establish clear roles. Even if it’s just for a short while, the environment will be beneficial. photo by Ben Firshman Hang your heraldry from the rafters Forts and castles are full of lore: coats of arms; paintings of battles; suits of armour. It’s impossible not to be surrounded by these stories, words and ways of thinking. Like the whiteboards on the walls, putting organisational lore in your physical surroundings makes it impossible not to see. Ryan Alexander brought some of those static-cling whiteboard sheets which were quickly filled with use cases; IA; team roles; and, most importantly, a glossary. As soon as we started working on the project, we realised we needed to get clear on what certain words meant: what was a logline, a range, a phase, a key moment? Were the back-end people using these words in the same way design and product were? Quickly writing up a glossary of terms meant everyone was instantly speaking the same language. There was no “Ah, I misunderstood because in the data structure x means y” or, even worse, accidental seepage of technical language into the user interface copy. Put a glossary of your internal terminology somewhere big and fat on the wall. Stand around it and argue until you agree on what it says. Leave it up; don’t underestimate the power of ambient communication and physical reference. Plan more, download less While the internet is forbidden inside the fort, we did go on downloading expeditions: NASA photography; code documentation; and so on. The project wouldn’t have been possible without a few trips to the web. We had two lists on the wall: groceries and supplies; internets – “loo roll; Tom Stafford photo”. This changed our usual design process, forcing us to plan carefully and think of what we needed ahead of time. Getting to the internet was a thirty-minute hike up a snow-covered cliff to the town airport, so you really had to need it, too.
The path to the internet For the visual design, especially, this resulted in more focus up front, and more communication between the designers on what assets we required. It made us make decisions earlier and stick with them, creating less distraction and churn later in the process. Try it at home: unplug once you’ve got the things you need. As an artist, it’s easier to let your inner voice shine through if you’re not looking at other people’s work while creating. Social design Finally, our design team experimented with a collaborative approach to wireframing. Once we had collectively nailed down use cases, IA, user journeys and other critical artefacts, we tried a pairing approach. One person drew in Illustrator in real time as the other two articulated what to draw. (This would work equally well with two people, but with three it meant that one of us could jump up and consult the lore on the walls or clarify a technical detail.) The result: we ended up considering more alternatives and quickly rallying around one solution, and resolved difficult problems more quickly. At a certain stage we discovered it was more efficient for one person to take over – this happened around the time when the basic wireframes existed in Illustrator and we’d collectively run through the use cases, making sure that everything was accounted for in a broad sense. At this point, take a break, go have a beer, and give yourself a pat on the back. Put the files somewhere accessible so everyone can use them as their base, and divide up the more detailed UI problems, screens or journeys. At this level of detail it’s better to have your personal headspace. Gavin called this ‘social design’. Chatting and drawing in real time turned what was normally a rather solitary act into a very social process, with some really promising results. I’d tried something like this before with product or developer folks, and it can work – but there’s something really beautiful about switching places and everyone involved being equally quick at drawing. That’s not something you get with non-designers, and frequent swapping of the ‘driver’ and ‘observer’ roles is a key aspect of pairing. Tackle the forest collectively and the trees individually – it will make your framework more robust and your details more polished. Win/win. The return home Grateful to see a 3G signal on our phones again, we found our flight off the island delayed, allowing for a flurry of domain name look-ups, Twitter catch-up, and e-mails to loved ones. A week in an isolated fort really made me appreciate continuous connectivity, but also just how unique some of these processes might be. You just never know what crazy place you might be designing from next. 2010 Hannah Donovan hannahdonovan 2010-12-09T00:00:00+00:00 https://24ways.org/2010/extreme-design/ process