{"rowid": 336, "title": "Practical Microformats with hCard", "contents": "You\u2019ve probably heard about microformats over the last few months. You may have even read the easily digestible introduction at Digital Web Magazine, but perhaps you\u2019ve not found time to actually implement much yet. That\u2019s understandable, as it can sometimes be difficult to see exactly what you\u2019re adding by applying a microformat to a page. Sure, you\u2019re semantically enhancing the information you\u2019re marking up, and the Semantic Web is a great idea and all, but what benefit is it right now, today? \n\nWell, the answer to that question is simple: you\u2019re adding lots of information that can be and is being used on the web here and now. The big ongoing battle amongst the big web companies is one of territory over information. Everyone\u2019s grasping for as much data as possible. Some of that information many of us are cautious to give away, but a lot of it we\u2019re happy to make freely available. Of the data you\u2019re giving away, it makes sense to give it as much meaning as possible, thus enabling anyone from your friends and family to the giant search company down the road to make the most of it.\n\nOK, enough of the waffle, let\u2019s get working.\n\nIntroducing hCard\n\nYou may have come across hCard. It\u2019s a microformat for describing contact information (or really address book information) from within your HTML. It\u2019s based on the vCard format, which is the format the contacts/address book program on your computer uses. All the usual fields are available \u2013 name, address, town, website, email, you name it.\n\nIf you\u2019re running Firefox and Greasemonkey (or if you can, just to try this out), install this user script. What it does is look for instances of the hCard microformat in a page, and then add in a link to pass any hCards it finds to a web service which will convert them to vCards. Take a look at the About the author box at the bottom of this article. It\u2019s an hCard, so you should be able to click the icon the user script inserts and add me to your Outlook contacts or OS X Address Book with just a click.\n\nSo microformats are useful after all. Free microformats all round!\n\nImplementing hCard\n\nThis is the really easy bit. All the hCard microformat is, is a bunch of predefined class names that you apply to the markup you\u2019ve probably already got around your contact information. Let\u2019s take the example of the About the author box from this article. Here\u2019s how the markup looks without hCard:\n\n
<div class=\"bio\">\n    <h3>About the author</h3>\n    <p>Drew McLellan is a web developer, author and no-good swindler from \n    just outside London, England. At the \n    <a href=\"http://www.webstandards.org/\">Web Standards Project</a> he works \n    on press, strategy and tools. Drew keeps a \n    <a href=\"http://www.allinthehead.com/\">personal weblog</a> covering web \n    development issues and themes.</p>\n</div>
\n\nThis is a really simple example because there are only two key bits of address book information here: my name and my website address. Let\u2019s push it a little and say that the Web Standards Project is the organisation I work for \u2013 that gives us Name, Company and URL.\n\nTo kick off an hCard, you need a containing object with a class of vcard. The div I already have with a class of bio is perfect for this \u2013 all it needs to do is contain the rest of the contact information.\n\nThe next thing to identify is my name. hCard uses a class of fn (meaning Full Name) to identify a name. As in this case there\u2019s no element surrounding my name, we can just use a span. These changes give us:\n\n
<div class=\"bio vcard\">\n    <h3>About the author</h3>\n    <p><span class=\"fn\">Drew McLellan</span> is a web developer...\n\nThe two remaining items are my URL and the organisation I belong to. The class names designated for those are url and org respectively. As both of those items are links in this case, I can apply the classes to those links. So here\u2019s the finished hCard.\n\n
<div class=\"bio vcard\">\n    <h3>About the author</h3>\n    <p><span class=\"fn\">Drew McLellan</span> is a web developer, author and \n    no-good swindler from just outside London, England. \n    At the <a class=\"org\" href=\"http://www.webstandards.org/\">Web Standards Project</a> \n    he works on press, strategy and tools. Drew keeps a \n    <a class=\"url\" href=\"http://www.allinthehead.com/\">personal weblog</a> covering web \n    development issues and themes.</p>\n</div>
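\n\nAs an aside, the same approach extends to any of the other vCard properties you might need \u2013 email, telephone, address and so on each have a predefined class name. The snippet below is purely illustrative (the email address and location are made up, and it isn\u2019t part of the author box above), but it gives a rough idea of how a couple of those extra fields might be marked up:\n\n<div class=\"vcard\">\n    <p><span class=\"fn\">Drew McLellan</span> \u2013 \n    <a class=\"email\" href=\"mailto:drew@example.com\">drew@example.com</a></p>\n    <p class=\"adr\"><span class=\"locality\">London</span>, <span class=\"country-name\">England</span></p>\n</div>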
\n\nOK, that was easy. By just applying a few easy class names to the HTML I was already publishing, I\u2019ve implemented an hCard that right now anyone with Greasemonkey can click to add to their address book, that Google and Yahoo! and whoever else can index and work out important things like which websites are associated with my name if they so choose (and boy, will they so choose), and in the future who knows what. In terms of effort, practically nil.\n\nWhere next?\n\nSo that was a trivial example, but to be honest it doesn\u2019t really get much more complex even with the most pernickety permutations. Because hCard is based on vCard (a mature and well thought-out standard), it\u2019s all tried and tested. Here\u2019s some good next steps.\n\n\n\tPlay with the hCard Creator\n\tTake a deep breath and read the spec\n\tStart implementing hCard as you go on your own projects \u2013 it takes very little time\n\n\nhCard is just one of an ever-increasing number of microformats. If this tickled your fancy, I suggest subscribing to the microformats site in your RSS reader to keep in touch with new developments.\n\nWhat\u2019s the take-away?\n\nThe take-away is this. They may sound like just more Web 2-point-HoHoHo hype, but microformats are a well thought-out, and easy to implement way of adding greater depth to the information you publish online. They have some nice benefits right away \u2013 certainly at geek-level \u2013 but in the longer term they become much more significant. We\u2019ve been at this long enough to know that the web has a long, long memory and that what you publish today will likely be around for years. But putting the extra depth of meaning into your documents now you can help guard that they\u2019ll continue to be useful in the future, and not just a bunch of flat ASCII.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-06T00:00:00+00:00", "url": "https://24ways.org/2005/practical-microformats-with-hcard/", "topic": "code"} {"rowid": 335, "title": "Naughty or Nice? CSS Background Images", "contents": "Web Standards based development involves many things \u2013 using semantically sound HTML to provide structure to our documents or web applications, using CSS for presentation and layout, using JavaScript responsibly, and of course, ensuring that all that we do is accessible and interoperable to as many people and user agents as we can.\n\nThis we understand to be good. \n\nAnd it is good. \n\nExcept when we don\u2019t clearly think through the full implications of using those techniques.\n\nWhich often happens when time is short and we need to get things done.\n\nHere are some naughty examples of CSS background images with their nicer, more accessible counterparts.\n\nTransaction related messages\n\nI\u2019m as guilty of this as others (or, perhaps, I\u2019m the only one that has done this, in which case this can serve as my holiday season confessional) We use lovely little icons to show status messages for a transaction to indicate if the action was successful, or was there a warning or error? For example:\n\n\u201cYour postal/zip code was not in the correct format.\u201d\n\nNotice that we place a nice little icon there, and use background colours and borders to convey a specific message: there was a problem that needs to be fixed. Notice that all of this visual information is now contained in the CSS rules for that div:\n\n
<div class=\"error\">\n    <p>Your postal/zip code was not in the correct format.</p>\n</div>
\n\n div.error {\n background: #ffcccc url(../images/error_small.png) no-repeat 5px 4px;\n color: #900;\n border-top: 1px solid #c00;\n border-bottom: 1px solid #c00;\n padding: 0.25em 0.5em 0.25em 2.5em;\n font-weight: bold;\n }\n\nUsing this approach also makes it very easy to create div.success and div.warning CSS rules, meaning we have less to change in our HTML.\n\nNice, right?\n\nNo. Naughty. \n\nVisual design communicates\n\nThe CSS is being used to convey very specific information. The choice of icon, the choice of background colour and borders tell us visually that there is something wrong. \n\nWith the icon as a background image \u2013 there is no way to specify any alt text for the icon, and significant meaning is lost. A screen reader user, for example, misses the fact that it is an \u201cerror.\u201d \n\nThe solution? Ask yourself: what is the bare minimum needed to indicate there was an error? Currently in the absence of CSS there will be no icon \u2013 which (I\u2019m hoping you agree) is critical to communicating there was an error.\n\nThe icon should be considered content and not simply presentational.\n\nThe borders and background colour are certainly much less critical \u2013 they belong in the CSS.\n\nLet\u2019s change the code to place the image directly in the HTML and use appropriate alt text to better communicate the meaning of the icon to all users:\n\n
<div class=\"bettererror\">\n    <img src=\"images/error_small.png\" alt=\"Error\" />\n    <p>Your postal/zip code was not in the correct format.</p>\n</div>
\n\n div.bettererror {\n background-color: #ffcccc;\n color: #900;\n border-top: 1px solid #c00;\n border-bottom: 1px solid #c00;\n padding: 0.25em 0.5em 0.25em 2.5em;\n font-weight: bold;\n position: relative;\n min-height: 1.25em;\n }\n\n div.bettererror img {\n display: block;\n position: absolute;\n left: 0.25em;\n top: 0.25em;\n padding: 0;\n margin: 0;\n }\n\n div.bettererror p {\n position: absolute;\n left: 2.5em;\n padding: 0;\n margin: 0;\n }\n\nCompare these two examples of transactional messages\n\nStatus of a Record\n\nThis example is pretty straightforward. Consider the following: a real estate listing on a web site. There are three \u201cstates\u201d for a listing: new, normal, and sold. Here\u2019s how they look:\n\nExample of a New Listing\n\nExample of A Sold Listing\n\nIf we (forgive the pun) blindly apply the \u201cuse a CSS background image\u201d technique we clearly run into problems with the new and sold images \u2013 they actually contain content with no way to specify an alternative when placed in the CSS.\n\nIn this case of the \u201cnew\u201d image, we can use the same strategy as we used in the first example (the transaction result). The \u201cnew\u201d image should be considered content and is placed in the HTML as part of the
<h2>...</h2>
that identifies the listing.\n\nHowever when considering the \u201csold\u201d listing, there are less changes to be made to keep the same look by leaving the \u201cSOLD\u201d image as a background image and providing the equivalent information elsewhere in the listing \u2013 namely, right in the heading.\n\nFor those that can\u2019t see the background image, the status is communicated clearly and right away. A screen reader user that is navigating by heading or viewing a listing will know right away that a particular property is sold.\n\nOf note here is that in both cases (new and sold) placing the status near the beginning of the record helps with a zoom layout as well.\n\nBetter Example of A Sold Listing\n\nSummary\n\nRemember: in the holiday season, its what you give that counts!! Using CSS background images is easy and saves time for you but think of the children. And everyone else for that matter\u2026 \n\nCSS background images should only be used for presentational images, not for those that contain content (unless that content is already represented and readily available elsewhere).", "year": "2005", "author": "Derek Featherstone", "author_slug": "derekfeatherstone", "published": "2005-12-20T00:00:00+00:00", "url": "https://24ways.org/2005/naughty-or-nice-css-background-images/", "topic": "code"} {"rowid": 334, "title": "Transitional vs. Strict Markup", "contents": "When promoting web standards, standardistas often talk about XHTML as being more strict than HTML. In a sense it is, since it requires that all elements are properly closed and that attribute values are quoted. But there are two flavours of XHTML 1.0 (three if you count the Frameset DOCTYPE, which is outside the scope of this article), defined by the Transitional and Strict DOCTYPEs. And HTML 4.01 also comes in those flavours.\n\nThe names reveal what they are about: Transitional DOCTYPEs are meant for those making the transition from older markup to modern ways. Strict DOCTYPEs are actually the default \u2013 the way HTML 4.01 and XHTML 1.0 were constructed to be used.\n\nA Transitional DOCTYPE may be used when you have a lot of legacy markup that cannot easily be converted to comply with a Strict DOCTYPE. But Strict is what you should be aiming for. It encourages, and in some cases enforces, the separation of structure and presentation, moving the presentational aspects from markup to CSS. From the HTML 4 Document Type Definition:\n\n\n\tThis is HTML 4.01 Strict DTD, which excludes the presentation attributes and elements that W3C expects to phase out as support for style sheets matures. Authors should use the Strict DTD when possible, but may use the Transitional DTD when support for presentation attribute and elements is required.\n\n\nAn additional benefit of using a Strict DOCTYPE is that doing so will ensure that browsers use their strictest, most standards compliant rendering modes.\n\nTommy Olsson provides a good summary of the benefits of using Strict over Transitional in Ten questions for Tommy Olsson at Web Standards Group:\n\n\n\tIn my opinion, using a Strict DTD, either HTML 4.01 Strict or XHTML 1.0 Strict, is far more important for the quality of the future web than whether or not there is an X in front of the name. The Strict DTD promotes a separation of structure and presentation, which makes a site so much easier to maintain.\n\n\nFor those looking to start using web standards and valid, semantic markup, it is important to understand the difference between Transitional and Strict DOCTYPEs. 
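\n\nAs a reference point, the two XHTML 1.0 DOCTYPE declarations in question look like this (worth double-checking against the W3C recommendation whenever you copy one in):\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\"\n    \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\"\n    \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n\n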
For complete listings of the differences between Transitional and Strict DOCTYPEs, see XHTML: Differences between Strict & Transitional, Comparison of Strict and Transitional XHTML, and XHTML1.0 Element Attributes by DTD.\n\nSome of the differences are more likely than others to cause problems for developers moving from a Transitional DOCTYPE to a Strict one, and I\u2019d like to mention a few of those.\n\nElements that are not allowed in Strict DOCTYPEs\n\n\n\tcenter\n\tfont\n\tiframe\n\tstrike\n\tu\n\n\nAttributes not allowed in Strict DOCTYPEs\n\n\n\talign (allowed on elements related to tables: col, colgroup, tbody, td, tfoot, th, thead, and tr)\n\tlanguage\n\tbackground\n\tbgcolor\n\tborder (allowed on table)\n\theight (allowed on img and object)\n\thspace\n\tname (allowed in HTML 4.01 Strict, not allowed on form and img in XHTML 1.0 Strict)\n\tnoshade\n\tnowrap\n\ttarget\n\ttext, link, vlink, and alink\n\tvspace\n\twidth (allowed on img, object, table, col, and colgroup)\n\n\nContent model differences\n\nAn element type\u2019s content model describes what may be contained by an instance of the element type. The most important difference in content models between Transitional and Strict is that blockquote, body, and form elements may only contain block level elements. A few examples:\n\n\n\ttext and images are not allowed immediately inside the body element, and need to be contained in a block level element like p or div\n\tinput elements must not be direct descendants of a form element\n\ttext in blockquote elements must be wrapped in a block level element like p or div\n\n\nGo Strict and move all presentation to CSS\n\nSomething that can be helpful when doing the transition from Transitional to Strict DOCTYPEs is to focus on what each element of the page you are working on is instead of how you want it to look.\n\nWorry about looks later and get the structure and semantics right first.", "year": "2005", "author": "Roger Johansson", "author_slug": "rogerjohansson", "published": "2005-12-13T00:00:00+00:00", "url": "https://24ways.org/2005/transitional-vs-strict-markup/", "topic": "code"} {"rowid": 333, "title": "The Attribute Selector for Fun and (no ad) Profit", "contents": "If I had a favourite CSS selector, it would undoubtedly be the attribute selector (Ed: You really need to get out more). For those of you not familiar with the attribute selector, it allows you to style an element based on the existence, value or partial value of a specific attribute.\n\nAt it\u2019s very basic level you could use this selector to style an element with particular attribute, such as a title attribute.\n\nCSS\n\nIn this example I\u2019m going to make all elements with a title attribute grey. I am also going to give them a dotted bottom border that changes to a solid border on hover. Finally, for that extra bit of feedback, I will change the cursor to a question mark on hover as well. \n\nabbr[title] {\n color: #666;\n border-bottom: 1px dotted #666;\n }\n\n abbr[title]:hover {\n border-bottom-style: solid;\n cursor: help;\n }\n\nThis provides a nice way to show your site users that elements with title tags are special, as they contain extra, hidden information.\n\nMost modern browsers such as Firefox, Safari and Opera support the attribute selector. 
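\n\nFor the sake of completeness, the sort of markup those abbr[title] rules are aimed at looks something like this (the sentence itself is just an example):\n\n<p>Styling with <abbr title=\"Cascading Style Sheets\">CSS</abbr> can be a lot of fun.</p>\n\n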
Unfortunately Internet Explorer 6 and below does not support the attribute selector, but that shouldn\u2019t stop you from adding nice usability embellishments to more modern browsers.\n\nInternet Explorer 7 looks set to implement this CSS2.1 selector, so expect to see it become more common over the next few years.\n\nStyling an element based on the existence of an attribute is all well and good, but it is still pretty limited. Where attribute selectors come into their own is their ability to target the value of an attribute. You can use this for a variety of interesting effects such as styling VoteLinks.\n\nVoteWhats?\n\nIf you haven\u2019t heard of VoteLinks, it is a microformat that allows people to show their approval or disapproval of a links destination by adding a pre-defined keyword to the rev attribute.\n\nFor instance, if you had a particularly bad meal at a restaurant, you could signify your dissaproval by adding a rev attribute with a value of vote-against.\n\nMomma Cherri's\n\nYou could then highlight these links by adding an image to the right of these links.\n\na[rev=\"vote-against\"]{\n padding-right: 20px;\n background: url(images/vote-against.png) no-repeat right top;\n}\n\nThis is a useful technique, but it will only highlight VoteLinks on sites you control. This is where user stylesheets come into effect. If you create a user stylesheet containing this rule, every site you visit that uses VoteLinks will receive your new style.\n\nCool huh?\n\nHowever my absolute favourite use for attribute selectors is as a lightweight form of ad blocking. Most online adverts conform to industry-defined sizes. So if you wanted to block all banner-ad sized images, you could simply add this line of code to your user stylesheet.\n\nimg[width=\"468\"][height=\"60\"],\nimg[width=\"468px\"][height=\"60px\"] {\n display: none !important;\n}\n\nTo hide any banner-ad sized element, such as flash movies, applets or iFrames, simply apply the above rule to every element using the universal selector.\n\n*[width=\"468\"][height=\"60\"], *[width=\"468px\"][height=\"60px\"] {\n display: none !important;\n}\n\nJust bare in mind when using this technique that you may accidentally hide something that isn\u2019t actually an advert; it just happens to be the same size.\n\nThe Interactive Advertising Bureau lists a number of common ad sizes. Using these dimensions, you can create stylesheet that blocks all the popular ad formats. Apply this as a user stylesheet and you never need to suffer another advert again.\n\nHere\u2019s wishing you a Merry, ad-free Christmas.", "year": "2005", "author": "Andy Budd", "author_slug": "andybudd", "published": "2005-12-11T00:00:00+00:00", "url": "https://24ways.org/2005/the-attribute-selector-for-fun-and-no-ad-profit/", "topic": "code"} {"rowid": 332, "title": "CSS Layout Starting Points", "contents": "I build a lot of CSS layouts, some incredibly simple, others that cause sleepless nights and remind me of the torturous puzzle books that were given to me at Christmas by aunties concerned for my education. However, most of the time these layouts fit quite comfortably into one of a very few standard formats. 
For example:\n\n\n\tLiquid, multiple column with no footer\n\tLiquid, multiple column with footer\n\tFixed width, centred\n\n\nRather than starting out with blank CSS and (X)HTML documents every time you need to build a layout, you can fairly quickly create a bunch of layout starting points, that will give you a solid basis for creating the rest of the design and mean that you don\u2019t have to remember how a three column layout with a footer is best achieved every time you come across one! \n\nThese starting points can be really basic, in fact that\u2019s exactly what you want as the final design, the fonts, the colours and so on will be different every time. It\u2019s just the main sections we want to be able to quickly get into place. For example, here is a basic starting point CSS and XHTML document for a fixed width, centred layout with a footer.\n\n \n\n\n Fixed Width and Centred starting point document\n \n \n\n\n
<body>\n<div id=\"wrapper\">\n    <div id=\"side\">\n        <div class=\"inner\">\n        Sidebar content here\n        </div>\n    </div>\n    <div id=\"content\">\n        <div class=\"inner\">\n        Your main content goes here.\n        </div>\n    </div>\n    <div id=\"footer\">\n        <div class=\"inner\">\n        Ho Ho Ho!\n        </div>\n    </div>\n</div>\n</body>\n</html>
\n\n\n\n body {\n text-align: center;\n min-width: 740px;\n padding: 0;\n margin: 0;\n }\n\n #wrapper {\n text-align: left;\n width: 740px;\n margin-left: auto;\n margin-right: auto;\n padding: 0;\n }\n\n #content {\n margin: 0 200px 0 0;\n }\n\n #content .inner {\n padding-top: 1px;\n margin: 0 10px 10px 10px;\n }\n\n #side {\n float: right;\n width: 180px;\n margin: 0;\n }\n\n #side .inner {\n padding-top: 1px;\n margin: 0 10px 10px 10px;\n }\n\n #footer {\n margin-top: 10px;\n clear: both;\n }\n\n #footer .inner {\n margin: 10px;\n }\n\n9 times out of 10, after figuring out exactly what main elements I have in a layout, I can quickly grab the \u2018one I prepared earlier\u2019, mark-up the relevant sections within the ready-made divs, and from that point on, I only need to worry about the contents of those different areas. The actual layout is tried and tested, one that I know works well in different browsers and that is unlikely to throw me any nasty surprises later on. In addition, considering how the layout is going to work first prevents the problem of developing a layout, then realising you need to put a footer on it, and needing to redevelop the layout as the method you have chosen won\u2019t work well with a footer.\n\nWhile enjoying your mince pies and mulled wine during the \u2018quiet time\u2019 between Christmas and New Year, why not create some starting point layouts of your own? The css-discuss Wiki, CSS layouts section is a great place to find examples that you can try out and find your favourite method of creating the various layout types.", "year": "2005", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2005-12-04T00:00:00+00:00", "url": "https://24ways.org/2005/css-layout-starting-points/", "topic": "code"} {"rowid": 331, "title": "Splintered Striper", "contents": "Back in March 2004, David F. Miller demonstrated a little bit of DOM scripting magic in his A List Apart article Zebra Tables.\n\nHis script programmatically adds two alternating CSS background colours to table rows, making them more readable and visually pleasing, while saving the document author the tedious task of manually assigning the styling to large static data tables.\n\nAlthough David\u2019s original script performs its duty well, it is nonetheless very specific and limited in its application. It only:\n\n\n\tworks on a single table, identified by its id, with at least a single tbody section\n\tassigns a background colour\n\tallows two colours for odd and even rows\n\tacts on data cells, rather than rows, and then only if they have no class or background colour already defined\n\n\nTaking it further\n\nIn a recent project I found myself needing to apply a striped effect to a medium sized unordered list. Instead of simply modifying the Zebra Tables code for this particular case, I decided to completely recode the script to make it more generic.\n\nBeing more general purpose, the function in my splintered striper experiment is necessarily more complex. 
Where the original script only expected a single parameter (the id of the target table), the new function is called as follows:\n\nstriper('[parent element tag]','[parent element class or null]','[child element tag]','[comma separated list of classes]')\n\nThis new, fairly self-explanatory function:\n\n\n\ttargets any type of parent element (and, if specified, only those with a certain class)\n\tassigns two or more classes (rather than just two background colours) to the child elements inside the parent\n\tpreserves any existing classes already assigned to the child elements\n\n\nSee it in action\n\nView the demonstration page for three usage examples. For simplicity\u2019s sake, we\u2019re making the calls to the striper function from the body\u2019s onload attribute. In a real deployment situation, we would look at attaching a behaviour to the onload programmatically \u2014 just remember that, as we need to pass variables to the striper function, this would involve creating a wrapper function which would then be attached\u2026something like:\n\nfunction stripe() {\n striper('tbody','splintered','tr','odd,even');\n}\n\nwindow.onload=stripe;\n\nA final thought\n\nJust because the function is called striper does not mean that it\u2019s limited to purely applying a striped look; as it\u2019s more of a general purpose \u201calternating class assignment\u201d script, you can achieve a whole variety of effects with it.", "year": "2005", "author": "Patrick Lauke", "author_slug": "patricklauke", "published": "2005-12-15T00:00:00+00:00", "url": "https://24ways.org/2005/splintered-striper/", "topic": "code"} {"rowid": 330, "title": "An Explanation of Ems", "contents": "Ems are so-called because they are thought to approximate the size of an uppercase letter M (and so are pronounced emm), although 1em is actually significantly larger than this. The typographer Robert Bringhurst describes the em thus:\n\n\n\tThe em is a sliding measure. One em is a distance equal to the type size. In 6 point type, an em is 6 points; in 12 point type an em is 12 points and in 60 point type an em is 60 points. Thus a one em space is proportionately the same in any size.\n\n\nTo illustrate this principle in terms of CSS, consider these styles:\n\n#box1 {\n font-size: 12px;\n width: 1em;\n height: 1em;\n border:1px solid black;\n}\n\n#box2 {\n font-size: 60px;\n width: 1em;\n height: 1em;\n border: 1px solid black;\n}\n\nThese styles will render like:\n\n M\n\nand\n\n M\n\nNote that both boxes have a height and width of 1em but because they have different font sizes, one box is bigger than the other. Box 1 has a font-size of 12px so its width and height is also 12px; similarly the text of box 2 is set to 60px and so its width and height are also 60px.", "year": "2005", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2005-12-02T00:00:00+00:00", "url": "https://24ways.org/2005/an-explanation-of-ems/", "topic": "design"} {"rowid": 329, "title": "Broader Border Corners", "contents": "A note from the editors: Since this article was written the CSS border-radius property has become widely supported in browsers. It should be preferred to this image technique.\n \n \n \n A quick and easy recipe for turning those single-pixel borders that the kids love so much into into something a little less right-angled.\n\nHere\u2019s the principle: We have a box with a one-pixel wide border around it. 
Inside that box is another box that has a little rounded-corner background image sitting snugly in one of its corners. The inner-box is then nudged out a bit so that it\u2019s actually sitting on top of the outer box. If it\u2019s all done properly, that little background image can mask the hard right angle of the default border of the outer-box, giving the impression that it actually has a rounded corner.\n\nTake An Image, Finely Chopped\n\n\n\nAdd A Sprinkle of Markup\n\n
<div id=\"content\">\n    <p>Lorem ipsum etc. etc. etc.</p>\n</div>
\n\nThrow In A Dollop of CSS\n\n#content { \n border: 1px solid #c03;\n}\n\n#content p {\n background: url(corner.gif) top left no-repeat;\n position: relative;\n left: -1px;\n top: -1px;\n padding: 1em;\n margin: 0;\n}\n\nBubblin\u2019 Hot\n\n\n\tThe content div has a one-pixel wide red border around it.\n\tThe paragraph is given a single instance of the background image, created to look like a one-pixel wide arc.\n\tThe paragraph is shunted outside of the box \u2013 back one pixel and up one pixel \u2013 so that it is sitting over the div\u2019s border. The white area of the image covers up that part of the border\u2019s corner, and the arc meets up with the top and left border.\n\tBecause, in this example, we\u2019re applying a background image to a paragraph, its top margin needs to be zeroed so that it starts at the top of its container.\n\n\nEt voil\u00e0. Bon app\u00e9tit.\n\nExtra Toppings\n\n\n\tIf you want to apply a curve to each one of the corners and you run out of meaningful markup to hook the background images on to, throw some spans or divs in the mix (there\u2019s nothing wrong with this if that\u2019s the effect you truly want \u2013 they don\u2019t hurt anybody) or use some nifty DOM Scripting to put the scaffolding in for you.\n\tNote that if you\u2019ve got more than one of these relative corners, you will need to compensate for the starting position of each box which is nested in an already nudged parent.\n\tYou\u2019re not limited to one pixel wide, rounded corners \u2013 the same principles apply to thicker borders, or corners with different shapes.", "year": "2005", "author": "Patrick Griffiths", "author_slug": "patrickgriffiths", "published": "2005-12-14T00:00:00+00:00", "url": "https://24ways.org/2005/broader-border-corners/", "topic": "design"} {"rowid": 328, "title": "Swooshy Curly Quotes Without Images", "contents": "The problem\n\nTake a quote and render it within blockquote tags, applying big, funky and stylish curly quotes both at the beginning and the end without using any images \u2013 at all.\n\nThe traditional way\n\nFeint background images under the text, or an image in the markup housed in a little float. Often designers only use the opening curly quote as it\u2019s just too difficult to float a closing one.\n\nWhy is the traditional way bad?\n\nWell, for a start there are no actual curly quotes in the text (unless you\u2019re doing some nifty image replacement). Thus with CSS disabled you\u2019ll only have default blockquote styling to fall back on. Secondly, images don\u2019t resize, so scaling text will have no affect on your graphic curlies.\n\nThe solution\n\nUse really big text. Then it can be resized by the browser, resized using CSS, and even be restyled with a new font style if you fancy it. It\u2019ll also make sense when CSS is unavailable.\n\nThe problem\n\nCreating \u201cDrop Caps\u201d with CSS has been around for a while (Big Dan Cederholm discusses a neat solution in that first book of his), but drop caps are normal characters \u2013 the A to Z or 1 to 10 \u2013 and these can all be pulled into a set space and do not serve up a ton of whitespace, unlike punctuation characters.\n\nCurly quotes aren\u2019t like traditional characters. Like full stops, commas and hashes they float within the character space and leave lots of dead white space, making it bloody difficult to manipulate them with CSS. Styles generally fit around text, so cutting into that character is tricky indeed. 
Also, all that extra white space is going to push into the quote text and make it look pretty uneven. This grab highlights the actual character space:\n\n\n\nSee how this is emphasized when we add a normal alphabetical character within the span. This is what we\u2019re dealing with here:\n\n\n\nThen, there\u2019s size. Call in a curly quote at less than 300% font-size and it ain\u2019t gonna look very big. The white space it creates will be big enough, but the curlies will be way too small. We need more like 700% (as in this example) to make an impression, but that sure makes for a big character space.\n\nPrepare the curlies\n\nFirstly, remove the opening \u201c from the quote. Replace it with the opening curly quote character entity \u201c. Then replace the closing \u201c with the entity reference for that, which is \u201d. Now at least the curlies will look nice and swooshy.\n\nAdd the hooks\n\nTwo reasons why we aren\u2019t using :first-letter pseudo class to manipulate the curlies. Firstly, only CSS2-friendly browsers would get what we\u2019re doing, and secondly we need to affect the last \u201cletter\u201d of our text also \u2013 the closing curly quote.\n\nSo, add a span around the opening curly, and a second span around the closing curly, giving complete control of the characters:\n\n
<blockquote><p><span class=\"bqstart\">&#8220;</span>Speech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.<span class=\"bqend\">&#8221;</span></p></blockquote>
\n\nSo far nothing will look any different, aside form the curlies looking a bit nicer. I know we\u2019ve just added extra markup, but the benefits as far as accessibility are concerned are good enough for me, and of course there are no images to download.\n\nThe CSS\n\nOK, easy stuff first. Our first rule .bqstart floats the span left, changes the color, and whacks the font-size up to an exuberant 700%. Our second rule .bqend does the same tricks aside from floating the curly to the right.\n\n.bqstart {\n float: left;\n font-size: 700%;\n color: #FF0000;\n }\n\n .bqend {\n float: right;\n font-size: 700%;\n color: #FF0000;\n }\n\nThat gives us this, which is rubbish. I\u2019ve highlighted the actual span area with outlines:\n\n\n\nNote that the curlies don\u2019t even fit inside the span! At this stage on IE 6 PC you won\u2019t even see the quotes, as it only places focus on what it thinks is in the div. Also, the quote text is getting all spangled.\n\nFiddle with margin and padding\n\nThink of that span outline box as a window, and that you need to position the curlies within that window in order to see them. By adding some small adjustments to the margin and padding it\u2019s possible to position the curlies exactly where you want them, and remove the excess white space by defining a height:\n\n.bqstart {\n float: left;\n height: 45px;\n margin-top: -20px;\n padding-top: 45px;\n margin-bottom: -50px;\n font-size: 700%;\n color: #FF0000;\n }\n\n .bqend {\n float: right;\n height: 25px;\n margin-top: 0px;\n padding-top: 45px;\n font-size: 700%;\n color: #FF0000;\n }\n\nI wanted the blocks of my curlies to align with the quote text, whereas you may want them to dig in or stick out more. Be aware however that my positioning works for IE PC and Mac, Firefox and Safari. Too much tweaking seems to break the magic in various browsers at various times. Now things are fitting beautifully:\n\nI must admit that the heights, margins and spacing don\u2019t make a lot of sense if you analyze them. This was a real trial and error job. Get it working on Safari, and IE would fail. Sort IE, and Firefox would go weird.\n\nFinished\n\nThe final thing looks ace, can be resized, looks cool without styles, and can be edited with CSS at any time. Here\u2019s a real example (note that I\u2019m specifying Lucida Grande and then Verdana for my curlies):\n\n \u201cSpeech marks. Curly quotes. That annoying thing cool people do with their fingers to emphasize a buzzword, shortly before you hit them.\u201d\n\nBrowsers happy\n\nAs I said, too much tweaking of margins and padding can break the effect in some browsers. Even now, Firefox insists on dropping the closing curly by approximately 6 or 7 pixels, and if I adjust the padding for that, it\u2019ll crush it into the text on Safari or IE. Weird. Still, as I close now it seems solid through resizing tests on Safari, Firefox, Camino, Opera and IE PC and Mac. Lovely.\n\nIt\u2019s probably not perfect, but together we can beat the evil typographic limitations of the web and walk together towards a brighter, more aligned world. 
Merry Christmas.", "year": "2005", "author": "Simon Collison", "author_slug": "simoncollison", "published": "2005-12-21T00:00:00+00:00", "url": "https://24ways.org/2005/swooshy-curly-quotes-without-images/", "topic": "business"} {"rowid": 327, "title": "Improving Form Accessibility with DOM Scripting", "contents": "The form label element is an incredibly useful little element \u2013 it lets you link the form field unquestionably with the descriptive label text that sits alongside or above it. This is a very useful feature for people using screen readers, but there are some problems with this element.\n\nWhat happens if you have one piece of data that, for various reasons (validation, the way your data is collected/stored etc), needs to be collected using several form elements?\n\nThe classic example is date of birth \u2013 ideally, you\u2019ll ask for the date of birth once but you may have three inputs, one each for day, month and year, that you also need to provide hints about the format required. The problem is that to be truly accessible you need to label each field. So you end up needing something to say \u201cthis is a date of birth\u201d, \u201cthis is the day field\u201d, \u201cthis is the month field\u201d and \u201cthis is the day field\u201d. Seems like overkill, doesn\u2019t it? And it can uglify a form no end.\n\nThere are various ways that you can approach it (and I think I\u2019ve seen them all). Some people omit the label and rely on the title attribute to help the user through; others put text in a label but make the text 1 pixel high and merging in to the background so that screen readers can still get that information. The most common method, though, is simply to set the label to not display at all using the CSS display:none property/value pairing (a technique which, for the time being, seems to work on most screen readers). But perhaps we can do more with this?\n\nThe technique I am suggesting as another alternative is as follows (here comes the pseudo-code):\n\n\n\tStart with a totally valid and accessible form\n\tEnsure that each form input has a label that is linked to its related form control\n\tApply a class to any label that you don\u2019t want to be visible (for example superfluous)\n\n\nThen, through the magic of unobtrusive JavaScript/the DOM, manipulate the page as follows once the page has loaded:\n\n\n\tFind all the label elements that are marked as superfluous and hide them\n\tFind out what input element each of these label elements is related to\n\tThen apply a hint about formatting required for input (gleaned from the original, now-hidden label text) \u2013 add it to the form input as default text\n\tFinally, add in a behaviour that clears or selects the default text (as you choose)\n\n\nSo, here\u2019s the theory put into practice \u2013 a date of birth, grouped using a fieldset, and with the behaviours added in using DOM, and here\u2019s the JavaScript that does the heavy lifting. \n\nBut why not just use display:none? As demonstrated at Juicy Studio, display:none seems to work quite well for hiding label elements. So why use a sledge hammer to crack a nut? 
In all honesty, this is something of an experiment, but consider the following:\n\n\n\tUsing the DOM, you can add extra levels of help, potentially across a whole form \u2013 or even range of forms \u2013 without necessarily increasing your markup (it goes beyond simply hiding labels)\n\tScreen readers today may identify a label that is set not to display, but they may not in the future \u2013 this might provide a way around\n\tBy expanding this technique above, it might be possible to visually change the parent container that groups these items \u2013 in this case, a fieldset and legend, which are notoriously difficult to style consistently across different browsers \u2013 while still retaining the underlying semantic/logical structure\n\n\nWell, it\u2019s an idea to think about at least. How is it for you? How else might you use DOM scripting to improve the accessiblity or usability of your forms?", "year": "2005", "author": "Ian Lloyd", "author_slug": "ianlloyd", "published": "2005-12-03T00:00:00+00:00", "url": "https://24ways.org/2005/improving-form-accessibility-with-dom-scripting/", "topic": "code"} {"rowid": 326, "title": "Don't be eval()", "contents": "JavaScript is an interpreted language, and like so many of its peers it includes the all powerful eval() function. eval() takes a string and executes it as if it were regular JavaScript code. It\u2019s incredibly powerful and incredibly easy to abuse in ways that make your code slower and harder to maintain. As a general rule, if you\u2019re using eval() there\u2019s probably something wrong with your design.\n\nCommon mistakes\n\nHere\u2019s the classic misuse of eval(). You have a JavaScript object, foo, and you want to access a property on it \u2013 but you don\u2019t know the name of the property until runtime. Here\u2019s how NOT to do it:\n\nvar property = 'bar';\nvar value = eval('foo.' + property);\n\nYes it will work, but every time that piece of code runs JavaScript will have to kick back in to interpreter mode, slowing down your app. It\u2019s also dirt ugly.\n\nHere\u2019s the right way of doing the above:\n\nvar property = 'bar';\nvar value = foo[property];\n\nIn JavaScript, square brackets act as an alternative to lookups using a dot. The only difference is that square bracket syntax expects a string.\n\nSecurity issues\n\nIn any programming language you should be extremely cautious of executing code from an untrusted source. The same is true for JavaScript \u2013 you should be extremely cautious of running eval() against any code that may have been tampered with \u2013 for example, strings taken from the page query string. Executing untrusted code can leave you vulnerable to cross-site scripting attacks.\n\nWhat\u2019s it good for?\n\nSome programmers say that eval() is B.A.D. \u2013 Broken As Designed \u2013 and should be removed from the language. However, there are some places in which it can dramatically simplify your code. A great example is for use with XMLHttpRequest, a component of the set of tools more popularly known as Ajax. XMLHttpRequest lets you make a call back to the server from JavaScript without refreshing the whole page. A simple way of using this is to have the server return JavaScript code which is then passed to eval(). 
Here is a simple function for doing exactly that \u2013 it takes the URL to some JavaScript code (or a server-side script that produces JavaScript) and loads and executes that code using XMLHttpRequest and eval().\n\nfunction evalRequest(url) {\n var xmlhttp = new XMLHttpRequest();\n xmlhttp.onreadystatechange = function() {\n if (xmlhttp.readyState==4 && xmlhttp.status==200) {\n eval(xmlhttp.responseText);\n }\n }\n xmlhttp.open(\"GET\", url, true);\n xmlhttp.send(null);\n }\n\nIf you want this to work with Internet Explorer you\u2019ll need to include this compatibility patch.", "year": "2005", "author": "Simon Willison", "author_slug": "simonwillison", "published": "2005-12-07T00:00:00+00:00", "url": "https://24ways.org/2005/dont-be-eval/", "topic": "code"} {"rowid": 325, "title": "\"Z's not dead baby, Z's not dead\"", "contents": "While Mr. Moll and Mr. Budd have pipped me to the post with their predictions for 2006, I\u2019m sure they won\u2019t mind if I sneak in another. The use of positioning together with z-index will be one of next year\u2019s hot techniques \n\nBoth has been a little out of favour recently. For many, positioned layouts made way for the flexibility of floats. Developers I speak to often associate z-index with Dreamweaver\u2019s layers feature. But in combination with alpha transparency support for PNG images in IE7 and full implementation of position property values, the stacking of elements with z-index is going to be big. I\u2019m going to cover the basics of z-index and how it can be used to create designs which \u2018break out of the box\u2019.\n\nNo positioning? No Z!\n\nRemember geometry? The x axis represents the horizontal, the y axis represents the vertical. The z axis, which is where we get the z-index, represents /depth/. Elements which are stacked using z-index are stacked from front to back and z-index is only applied to elements which have their position property set to relative or absolute. No positioning, no z-index. Z-index values can be either negative or positive and it is the element with the highest z-index value appears closest to the viewer, regardless of its order in the source. Furthermore, if more than one element are given the same z-index, the element which comes last in source order comes out top of the pile. \n\nLet\u2019s take three
divs.\n\n<div id=\"one\"></div>\n<div id=\"two\"></div>\n<div id=\"three\"></div>
\n\n #one { \n position: relative; \n z-index: 3;\n }\n\n #two { \n position: relative; \n z-index: 1;\n }\n\n #three {\n position: relative; \n z-index: 2;\n }\n\n\n\nAs you can see, the
div with the z-index of 3 will appear closest, even though it comes before its siblings in the source order. As these three divs have no defined positioning context in the form of a positioned parent such as a wrapper div, their stacking order is defined from the root element, html. Simple stuff, but these building blocks are the basis on which we can create interesting interfaces (particularly when used in combination with image replacement and transparent PNGs).\n\nBrand building\n\nNow let\u2019s take three more basic elements \u2013 an h1, a blockquote and a p \u2013 all inside a branding div which acts as a new positioning context. By enclosing them inside a positioned parent, we establish a new stacking order which is independent of either the root element or other positioning contexts on the page.\n\n
<div id=\"branding\">\n    <h1>Worrysome.com</h1>\n    <blockquote>\n        <p>Don' worry 'bout a thing...</p>\n    </blockquote>\n    <p>Take the weight of the world off your shoulders.</p>\n</div>
\n\nApplying a little positioning and z-index magic we can both set the position of these elements inside their positioning context and their stacking order. As we are going to use background images made from transparent PNGs, each element will allow another further down the stacking order to show through. This makes for some novel effects, particularly in liquid layouts.\n\n(Ed: We\u2019re using n below to represent whatever values you require for your specific design.) \n\n#branding {\n position: relative;\n width: n;\n height: n;\n background-image: url(n);\n }\n\n #branding>h1 {\n position: absolute;\n left: n;\n top: n;\n width: n;\n height: n;\n background-image: url(h1.png);\n text-indent: n;\n }\n\n #branding>blockquote {\n position: absolute;\n left: n;\n top: n;\n width: n;\n height: n;\n background-image: url(bq.png);\n text-indent: n;\n\n }\n\n #branding>p {\n position: absolute;\n right: n;\n top: n;\n width: n;\n height: n;\n background-image: url(p.png);\n text-indent: n;\n }\n\nNext we can begin to see how the three elements build upon each other.\n\n\n1. Elements outlined\n\n\n2. Positioned elements overlayed to show context\n\n\n3. Our final result\n\nMultiple stacking orders\n\nNot only can elements within a positioning context be given a z-index, but those positioning contexts themselves can also be stacked.\n\n\nTwo positioning contexts, each with their own stacking order\n\nInterestingly each stacking order is independent of that of either the root element or its siblings and we can exploit this to make complex layouts from just a few semantic elements. This technique was used heavily on my recent redesign of Karova.com.\n\nDissecting part of Karova.com\n\nFirst the XHTML. The default template markup used for the site places
the content div and the navigation div as siblings inside their container.\n\n<div id=\"container\">\n    <div id=\"content\">\n        ...\n    </div>\n    <div id=\"nav_main\">\n        ...\n    </div>\n</div>\n\nBy giving the navigation div a lower z-index than the content div
we can ensure that the positioned content elements will always appear closest to the viewer, despite the fact that the navigation comes after the content in the source.\n\n#content { \n position: relative; \n z-index: 2; \n }\n\n #nav_main { \n position: absolute; \n z-index: 1; \n }\n\nNow applying absolute positioning with a negative top value to the
h2 and a higher z-index value than the following paragraph
ensures that the header sits not only on top of the navigation but also the styled paragraph which follows it.\n\nh2 {\n position: absolute;\n z-index: 200;\n top: -n;\n }\n\n h2+p {\n position: absolute;\n z-index: 100;\n margin-top: -n;\n padding-top: n;\n }\n\n\nDissecting part of Karova.com\n\nYou can see the full effect in the wild on the Karova.com site.\n\nHave a great holiday season!", "year": "2005", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2005-12-16T00:00:00+00:00", "url": "https://24ways.org/2005/zs-not-dead-baby-zs-not-dead/", "topic": "design"} {"rowid": 324, "title": "Debugging CSS with the DOM Inspector", "contents": "An Inspector Calls\n\nThe larger your site and your CSS becomes, the more likely that you will run into bizarre, inexplicable problems. Why does that heading have all that extra padding? Why is my text the wrong colour? Why does my navigation have a large moose dressed as Noel Coward on top of all the links? \n\nPerhaps you work in a collaborative environment, where developers and other designers are adding code? In which case, the likelihood of CSS strangeness is higher.\n\nYou need to debug. You need Firefox\u2019s wise-guy know-it-all, the DOM Inspector. \n\nThe DOM Inspector knows where everything is in your layout, and more importantly, what causes it to look the way it does. So without further ado, load up any css based site in your copy of Firefox (or Flock for that matter), and launch the DOM Inspector from the Tools menu.\n\nThe inspector uses two main panels \u2013 the left to show the DOM tree of the page, and the right to show you detail:\n\n\n\nThe Inspector will look at whatever site is in the front-most window or tab, but you can also use it without another window. Type in a URL at the top (A), press \u2018Inspect\u2019 (B) and a third panel appears at the bottom, with the browser view. I find this layout handier than looking at a window behind the DOM Inspector.\n\nStep 1 \u2013 find your node!\n\nEach element on your page \u2013 be it a HTML tag or a piece of text, is called a \u2018node\u2019 of the DOM tree. These nodes are all listed in the left hand panel, with any ID or CLASS attribute values next to them. When you first look at a page, you won\u2019t see all those yet. Nested HTML elements (such as a link inside a paragraph) have a reveal triangle next to their name, clicking this takes you one level further down. \n\nThis can be fine for finding the node you want to look at, but there are easier ways. Say you have a complex rounded box technique that involves 6 nested DIVs? You\u2019d soon get tired of clicking all those triangles to find the element you want to inspect. Click the top left icon \u00a9 \u2013 \u201cFind a node to inspect by clicking on it\u201d and then select the area you want to inspect. Boom! All that drilling down the DOM tree has been done for you! Huzzah!\n\nIf you\u2019re looking for an element that you know has an ID (such as
<div id=\"content\">
    ), or a specific HTML tag or attribute, click the binoculars icon (D) for \u201cFinds a node to inspect by ID, tag or attribute\u201d (You can also use Ctrl-F or Apple-F to do this if the DOM Inspector is the frontmost window) :\n\n\n\nType in the name and Bam! You\u2019re there! Pressing F3 will take you to any other instances. Notice also, that when you click on a node in the inspector, it highlights where it is in the browser view with a flashing red border!\n\nNow that we\u2019ve found the troublesome node on the page, we can find out what\u2019s up with it\u2026\n\nStep 2 \u2013 Debug that node!\n\nOnce the node is selected, we move over to the right hand panel. Choose \u2018CSS Style Rules\u2019 from the document menu (E), and all the CSS rules that apply to that node are revealed, in the order that they load:\n\n\n\nYou\u2019ll notice that right at the top, there is a CSS file you might not recognise, with a file path beginning with \u201cresource://\u201d. This is the browsers default CSS, that creates the basic rendering. You can mostly ignore this, especially if use the star selector method of resetting default browser styles. \n\nYour style sheets come next. See how helpful it is? It even tells you the line number where to find the related CSS rules! If you use CSS shorthand, you\u2019ll notice that the values are split up (e.g margin-left, margin-right etc.). \n\nNow that you can see all the style rules affecting that node, the rest is up to you! Happy debugging!", "year": "2005", "author": "Jon Hicks", "author_slug": "jonhicks", "published": "2005-12-22T00:00:00+00:00", "url": "https://24ways.org/2005/debugging-css-with-the-dom-inspector/", "topic": "process"} {"rowid": 323, "title": "Introducing UDASSS!", "contents": "Okay. What\u2019s that mean?\n\nUnobtrusive Degradable Ajax Style Sheet Switcher!\n\nBoy are you in for treat today \u2018cause we\u2019re gonna have a whole lotta Ajaxifida Unobtrucitosity CSS swappin\u2019 Fun!\n\nOkay are you really kidding? Nope. I\u2019ve even impressed myself on this one. Unfortunately, I don\u2019t have much time to tell you the ins and outs of what I actually did to get this to work. We\u2019re talking JavaScript, CSS, PHP\u2026Ajax. But don\u2019t worry about that. I\u2019ve always believed that a good A.P.I. is an invisible A.P.I\u2026 and this I felt I achieved. The only thing you need to know is how it works and what to do.\n\nA Quick Introduction Anyway\u2026\n\nFirst of all, the idea is very simple. I wanted something just like what Paul Sowden put together in \nAlternative Style: Working With Alternate Style Sheets from Alistapart Magazine EXCEPT a few minor (not-so-minor actually) differences which I\u2019ve listed briefly below:\n\n\n\n\tAllow users to switch styles without JavaScript enabled (degradable)\n\tPreventing the F.O.U.C. before the window \u2018load\u2019 when getting preferred styles\n\tKeep the JavaScript entirely off our markup (no onclick\u2019s or onload\u2019s)\n\tMake it very very easy to implement (ok, Paul did that too)\n\n\nWhat I did to achieve this was used server-side cookies instead of JavaScript cookies. Hence, PHP. However this isn\u2019t a \u201cPHP style switcher\u201d \u2013 which is where Ajax comes in. For the extreme technical folks, no, there is no xml involved here, or even a callback response. I only say Ajax because everyone knows what \u2018it\u2019 means. With that said, it\u2019s the Ajax that sets the cookies \u2018on the fly\u2019. Got it? 
Awesome!\n\nWhat you need\n\nLuckily, I\u2019ve done the work for you. It\u2019s all packaged up in a nice zip file (at the end\u2026keep reading for now) \u2013 so from here on out, \njust follow these instructions\n\nAs I\u2019ve mentioned, one of the things we\u2019ll be working with is PHP. So, first things first, open up a file called index and save it with a \u2018.php\u2019 extension.\n\nNext, place the following text at the top of your document (even above your DOCTYPE)\n\nadd('css/global.css','screen,projection'); // [Global Styles]\n $styleSheet->add('css/preferred.css','screen,projection','Wog Standard'); // [Preferred Styles]\n $styleSheet->add('css/alternate.css','screen,projection','Tiny Fonts',true); // [Alternate Styles]\n $styleSheet->add('css/alternate2.css','screen,projection','Big O Fonts',true); // // [Alternate Styles]\n $styleSheet->getPreferredStyles();\n ?>\n\nThe way this works is REALLY EASY. Pay attention closely.\n\nNotice in the first line we\u2019ve included our style-switcher.php file.\n\nNext we instantiate a PHP class called AlternateStyles() which will allow us to configure our style sheets. \nSo for kicks, let\u2019s just call our object $styleSheet\n\nAs part of the AlternateStyles object, there lies a public method called add. So naturally with our $styleSheet object, we can call it to (da \u2013 da-da-da!) Add Style Sheets!\n\nHow the add() method works\n\nThe add method takes in a possible four arguments, only one is required. However, you\u2019ll want to add some\u2026 since the whole point is working with alternate style sheets.\n\n$path can simply be a uri, absolute, or relative path to your style sheet. $media adds a media attribute to your style sheets. $title gives a name to your style sheets (via title attribute).$alternate (which shows boolean) simply tells us that these are the alternate style sheets.\n\nadd() Tips\n\nFor all global style sheets (meaning the ones that will always be seen and will not be swapped out), simply use the add method as shown next to // [Global Styles].\n\nTo add preferred styles, do the same, but add a \u2018title\u2019.\n\nTo add the alternate styles, do the same as what we\u2019ve done to add preferred styles, but add the extra boolean and set it to true.\n\nNote following when adding style sheets\n\n\n\tMultiple global style sheets are allowed\n\tYou can only have one preferred style sheet (That\u2019s a browser rule)\n\tFeel free to add as many alternate style sheets as you like\n\n\nMoving on\n\nSimply add the following snippet to the of your web document:\n\n\n \n \n drop();\n ?>\n\nNothing much to explain here. Just use your copy & paste powers.\n\nHow to Switch Styles\n\nWhether you knew it or not, this baby already has the built in \u2018ubobtrusive\u2019 functionality that lets you switch styles by the drop of any link with a class name of \u2018altCss\u2018. Just drop them where ever you like in your document as follows:\n\nBog Standard\n Small Fonts\n Large Fonts\n\nTake special note where the file is linking to. Yep. Just linking right back to the page we\u2019re on. The only extra parameters we pass in is a variable called \u2018css\u2019 \u2013 and within that we append the names of our style sheets.\n\nAlso take very special note on the names of the style sheets have an under_score to take place of any spaces we might have.\n\nGo ahead\u2026 play around and change the style sheet on the example page. Try disabling JavaScript and refreshing your browser. 
Still works!\n\nCool eh?\n\nWell, I put this together in one night so it\u2019s still a work in progress and very beta. If you\u2019d like to hear more about it and its future development, be sure stop on by my site where I\u2019ll definitely be maintaining it.\n\nDownload the beta anyway\n\nWell this wouldn\u2019t be fun if there was nothing to download. So we\u2019re hooking you up so you don\u2019t go home (or logoff) unhappy\n\n Download U.D.A.S.S.S | V0.8\n\nMerry Christmas!\n\nThanks for listening and I hope U.D.A.S.S.S. has been well worth your time and will bring many years of Ajaxy Style Switchin\u2019 Fun!\n\nMany Blessings, Merry Christmas and have a great new year!", "year": "2005", "author": "Dustin Diaz", "author_slug": "dustindiaz", "published": "2005-12-18T00:00:00+00:00", "url": "https://24ways.org/2005/introducing-udasss/", "topic": "code"} {"rowid": 322, "title": "Introduction to Scriptaculous Effects", "contents": "Gather around kids, because this year, much like in that James Bond movie with Denise Richards, Christmas is coming early\u2026 in the shape of scrumptuous smooth javascript driven effects at your every whim.\n\nNow what I\u2019m going to do, is take things down a notch. Which is to say, you don\u2019t need to know much beyond how to open a text file and edit it to follow this article. Personally, I for instance can\u2019t code to save my life.\n\nWell, strictly speaking, that\u2019s not entirely true. If my life was on the line, and the code needed was really simple and I wasn\u2019t under any time constraints, then yeah maybe I could hack my way out of it\n\nBut my point is this: I\u2019m not a programmer in the traditional sense of the word. In fact, what I do best, is scrounge code off of other people, take it apart and then put it back together with duct tape, chewing gum and dumb blind luck.\n\nNo, don\u2019t run! That happens to be a good thing in this case. You see, we\u2019re going to be implementing some really snazzy effects (which are considerably more relevant than most people are willing to admit) on your site, and we\u2019re going to do it with the aid of Thomas Fuchs\u2019 amazing Script.aculo.us library. And it will be like stealing candy from a child.\n\nWhat Are We Doing?\n\nI\u2019m going to show you the very basics of implementing the Script.aculo.us javascript library\u2019s Combination Effects. These allow you to fade elements on your site in or out, slide them up and down and so on.\n\nWhy Use Effects at All?\n\nBefore get started though, let me just take a moment to explain how I came to see smooth transitions as something more than smoke and mirror-like effects included for with little more motive than to dazzle and make parents go \u2018uuh, snazzy\u2019.\n\nEarlier this year, I had the good fortune of meeting the kind, gentle and quite knowledgable Matt Webb at a conference here in Copenhagen where we were both speaking (though I will be the first to admit my little talk on Open Source Design was vastly inferior to Matt\u2019s talk). Matt held a talk called Fixing Broken Windows (based on the Broken Windows theory), which really made an impression on me, and which I have since then referred back to several times.\n\nYou can listen to it yourself, as it\u2019s available from Archive.org. Though since Matt\u2019s session uses many visual examples, you\u2019ll have to rely on your imagination for some of the examples he runs through during it. 
Also, I think it looses audio for a few seconds every once in a while.\n\nAnyway, one of the things Matt talked a lot about, was how our eyes are wired to react to movement. The world doesn\u2019t flickr. It doesn\u2019t disappear or suddenly change and force us to look for the change. Things move smoothly in the real world. They do not pop up.\n\nHow it Works\n\nOnce the necessary files have been included, you trigger an effect by pointing it at the ID of an element. Simple as that.\n\nImplementing the Effects\n\nSo now you know why I believe these effects have a place in your site, and that\u2019s half the battle. Because you see, actually getting these effects up and running, is deceptively simple.\n\nFirst, go and download the latest version of the library (as of this writing, it\u2019s version 1.5 rc5). Unzip itand open it up.\n\nNow we\u2019re going to bypass the instructions in the readme file. Script.aculo.us can do a bunch of quite advanced things, but all we really want from it is its effects. And by sidestepping the rest of the features, we can shave off roughly 80KB of unnecessary javascript, which is well worth it if you ask me.\n\nAs with Drew\u2019s article on Easy Ajax with Prototype, script.aculo.us also uses the Prototype framework by Sam Stephenson. But contrary to Drew\u2019s article, you don\u2019t have to download Prototype, as a version comes bundled with script.aculo.us (though feel free to upgrade to the latest version if you so please).\n\nSo in the unzipped folder, containing the script.aculo.us files and folder, go into \u2018lib\u2019 and grab the \u2018prototype.js\u2019 file. Move it to whereever you want to store the javascript files. Then fetch the \u2018effects.js\u2019 file from the \u2018src\u2019 folder and put it in the same place.\n\nTo make things even easier for you to get this up and running, I have prepared a small javascript snippet which does some checking to see what you\u2019re trying to do. The script.aculo.us effects are all either \u2018turn this off\u2019 or \u2018turn this on\u2019. What this snippet does, is check to see what state the target currently has (is it on or off?) and then use the necessary effect.\n\nYou can either skip to the end and download the example code, or copy and paste this code into a file manually (I\u2019ll refer to that file as combo.js):\n\nEffect.OpenUp = function(element) {\n element = $(element);\n new Effect.BlindDown(element, arguments[1] || {});\n }\n\n Effect.CloseDown = function(element) {\n element = $(element);\n new Effect.BlindUp(element, arguments[1] || {});\n }\n\n Effect.Combo = function(element) {\n element = $(element);\n if(element.style.display == 'none') { \n new Effect.OpenUp(element, arguments[1] || {}); \n }else { \n new Effect.CloseDown(element, arguments[1] || {}); \n }\n }\n\nCurrently, this code uses the BlindUp and BlindDown code, which I personally like, but there\u2019s nothing wrong with you changing the effect-type into one of the other effects available.\n\nNow, include the three files in the header of your code, like so:\n\n\n\n\n\nNow insert the element you want to use the effect on, like so:\n\n
    Lorem ipsum dolor sit amet.
    \n\nThe above element will start out invisible, and when triggered will be revealed. If you want it to start visible, simply remove the style parameter.\n\nAnd now for the trigger \n\nClick Here\n\nAnd that, is pretty much it. Clicking the link should unfold the DIV targeted by the effect, in this case \u2018content\u2019.\n\nEffect Options\n\nNow, it gets a bit long-haired though. The documentation for script.aculo.us is next to non-existing, and because of that you\u2019ll have to do some digging yourself to appreciate the full potentialof these effects.\n\nFirst of all, what kind of effects are available? Well you can go to the demo page and check them out, or you can open the \u2018effects.js\u2019 file and have a look around, something I recommend doing regardlessly, to gain an overview of what exactly you\u2019re dealing with.\n\nIf you dissect it for long enough, you can even distill some of the options available for the various effects. In the case of the BlindUp and BlindDown effect, which we\u2019re using in our example (as triggered from combo.js), one of the things that would be interesting to play with would be the duration of the effect. If it\u2019s too long, it will feel slow and unresponsive. Too fast and it will be imperceptible.\n\nYou set the options like so:\n\nClick Here\n\nThe change from the previous link being the inclusion of , {duration: .2}. In this case, I have lowered the duration to 0.2 second, to really make it feel snappy.\n\nYou can also go all-out and turn on all the bells and whistles of the Blind effect like so (slowed down to a duration of three seconds so you can see what\u2019s going on):\n\nClick Here\n\nConclusion\n\nAnd that\u2019s pretty much it. The rest is a matter of getting to know the rest of the effects and their options as well as finding out just when and where to use them. Remember the ancient Chinese saying: Less is more.\n\nDownload Example\n\nI have prepared a very basic example, which you can download and use as a reference point.", "year": "2005", "author": "Michael Heilemann", "author_slug": "michaelheilemann", "published": "2005-12-12T00:00:00+00:00", "url": "https://24ways.org/2005/introduction-to-scriptaculous-effects/", "topic": "code"} {"rowid": 321, "title": "Tables with Style", "contents": "It might not seem like it but styling tabular data can be a lot of fun. From a semantic point of view, there are plenty of elements to tie some style into. You have cells, rows, row groups and, of course, the table element itself. Adding CSS to a paragraph just isn\u2019t as exciting.\n\nWhere do I start?\n\nFirst, if you have some tabular data (you know, like a spreadsheet with rows and columns) that you\u2019d like to spiffy up, pop it into a table \u2014 it\u2019s rightful place!\n\nTo add more semantics to your table \u2014 and coincidentally to add more hooks for CSS \u2014 break up your table into row groups. There are three types of row groups: the header (thead), the body (tbody) and the footer (tfoot). You can only have one header and one footer but you can have as many table bodies as is appropriate.\n\nSample table example\n\nInspiration\n\nTable Striping\n\nTo improve scanning information within a table, a common technique is to style alternating rows. Also known as zebra tables. Whether you apply it using a class on every other row or turn to JavaScript to accomplish the task, a handy-dandy trick is to use a semi-transparent PNG as your background image. This is especially useful over patterned backgrounds. 
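If you'd rather have a script add that class to every other row for you instead of hard-coding it, a rough sketch along these lines would do the trick. It's only an illustration: it assumes your rows live inside a tbody, and it simply tags alternate rows with the odd class that the CSS below hooks onto.

 function stripeTables() {
   if (!document.getElementsByTagName) return;
   var bodies = document.getElementsByTagName("tbody");
   for (var i = 0; i < bodies.length; i++) {
     var rows = bodies[i].getElementsByTagName("tr");
     for (var j = 0; j < rows.length; j++) {
       // rows 1, 3, 5 and so on pick up the class our CSS styles
       if (j % 2 == 0) {
         rows[j].className += (rows[j].className ? " " : "") + "odd";
       }
     }
   }
 }
 window.onload = stripeTables;

Either way, once the class is in place the CSS can get on with the pretty part: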
\n\ntbody tr.odd td {\n background:transparent url(background.png) repeat top left;\n }\n\n * html tbody tr.odd td {\n background:#C00;\n filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(\n src='background.png', sizingMethod='scale');\n }\n\nWe turn off the default background and apply our PNG hack to have this work in Internet Explorer. \n\nStyling Columns\n\nDid you know you could style a column? That\u2019s right. You can add special column (col) or column group (colgroup) elements. With that you can add border or background styles to the column.\n\n\n \n \n \n ...\n\nCheck out the example.\n\nFun with Backgrounds\n\nPop in a tiled background to give your table some character! Internet Explorer\u2019s PNG hack unfortunately only works well when applied to a cell.\n\nTo figure out which background will appear over another, just remember the hierarchy:\n\n (bottom) Table \u2192 Column \u2192 Row Group \u2192 Row \u2192 Cell (top)\n\nThe Future is Bright\n\nOnce browser-makers start implementing CSS3, we\u2019ll have more power at our disposal. Just with :first-child and :last-child, you can pull off a scalable version of our previous table with rounded corners and all \u2014 unfortunately, only Firefox manages to pull this one off successfully. And the selector the masses are clamouring for, nth-child, will make zebra tables easy as eggnog.", "year": "2005", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2005-12-19T00:00:00+00:00", "url": "https://24ways.org/2005/tables-with-style/", "topic": "code"} {"rowid": 320, "title": "DOM Scripting Your Way to Better Blockquotes", "contents": "Block quotes are great. I don\u2019t mean they\u2019re great for indenting content \u2013 that would be an abuse of the browser\u2019s default styling. I mean they\u2019re great for semantically marking up a chunk of text that is being quoted verbatim. They\u2019re especially useful in blog posts. \n\n
<blockquote>\n  <p>Progressive Enhancement, as a label for a strategy for Web design, was coined by Steven Champeon in a series of articles and presentations for Webmonkey and the SxSW Interactive conference.</p>\n</blockquote>
\n\nNotice that you can\u2019t just put the quoted text directly between the <blockquote> tags. In order for your markup to be valid, block quotes may only contain block-level elements such as paragraphs.\n\nThere is an optional cite attribute that you can place in the opening <blockquote> tag. This should contain a URL containing the original text you are quoting:\n\n
    \n

    Progressive Enhancement, as a label for a strategy for Web design, \n was coined by Steven Champeon in a series of articles and presentations \n for Webmonkey and the SxSW Interactive conference.

    \n
    \n\nGreat! Except\u2026 the default behavior in most browsers is to completely ignore the cite attribute. Even though it contains important and useful information, the URL in the cite attribute is hidden.\n\nYou could simply duplicate the information with a hyperlink at the end of the quoted text:\n\n
    \n

    Progressive Enhancement, as a label for a strategy for Web design, \n was coined by Steven Champeon in a series of articles and presentations \n for Webmonkey and the SxSW Interactive conference.

    \n

    \n source\n

    \n
    \n\nBut somehow it feels wrong to have to write out the same URL twice every time you want to quote something. It could also get very tedious if you have a lot of quotes.\n\nWell, \u201ctedious\u201d is no problem to a programming language, so why not use a sprinkling of DOM Scripting? Here\u2019s a plan for generating an attribution link for every block quote with a cite attribute:\n\n\n\tWrite a function called prepareBlockquotes.\n\tBegin by making sure the browser understands the methods you will be using.\n\tGet all the blockquote elements in the document.\n\tStart looping through each one.\n\tGet the value of the cite attribute.\n\tIf the value is empty, continue on to the next iteration of the loop.\n\tCreate a paragraph.\n\tCreate a link.\n\tGive the paragraph a class of \u201cattribution\u201d.\n\tGive the link an href attribute with the value from the cite attribute.\n\tPlace the text \u201csource\u201d inside the link.\n\tPlace the link inside the paragraph.\n\tPlace the paragraph inside the block quote.\n\tClose the for loop.\n\tClose the function.\n\n\nHere\u2019s how that translates to JavaScript:\n\nfunction prepareBlockquotes() {\n if (!document.getElementsByTagName || !document.createElement || !document.appendChild) return;\n var quotes = document.getElementsByTagName(\"blockquote\");\n for (var i=0; i tags.\n\nYou can style the attribution link using CSS. It might look good aligned to the right with a smaller font size.\n\nIf you\u2019re looking for something to do to keep you busy this Christmas, I\u2019m sure that this function could be greatly improved. Here are a few ideas to get you started:\n\n\n\tShould the text inside the generated link be the URL itself?\n\tIf the block quote has a title attribute, how would you take its value and use it as the text inside the generated link?\n\tShould the attribution paragraph be placed outside the block quote? If so, how would you that (remember, there is an insertBefore method but no insertAfter)?\n\tCan you think of other instances of useful information that\u2019s locked away inside attributes? Access keys? Abbreviations?", "year": "2005", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2005-12-05T00:00:00+00:00", "url": "https://24ways.org/2005/dom-scripting-your-way-to-better-blockquotes/", "topic": "code"} {"rowid": 319, "title": "Avoiding CSS Hacks for Internet Explorer", "contents": "Back in October, IEBlog issued a call to action, asking developers to clean up their CSS hacks for IE7 testing. Needless to say, a lot of hubbub ensued\u2026 both on IEBlog and elsewhere. My contribution to all of the noise was to suggest that developers review their code and use good CSS hacks. But what makes a good hack?\n\nTantek \u00c7elik, the Godfather of CSS hacks, gave us the answer by explaining how CSS hacks should be designed. In short, they should (1) be valid, (2) target only old/frozen/abandoned user-agents/browsers, and (3) be ugly. Tantek also went on to explain that using a feature of CSS is not a hack.\n\nNow, I\u2019m not a frequent user of CSS hacks, but Tantek\u2019s post made sense to me. In particular, I felt it gave developers direction on how we should be coding to accommodate that sometimes troublesome browser, Internet Explorer. But what I\u2019ve found, through my work with other developers, is that there is still much confusion over the use of CSS hacks and IE. 
Using examples from the code I\u2019ve seen recently, allow me to demonstrate how to clean up some IE-specific CSS hacks.\n\nThe two hacks that I\u2019ve found most often in the code I\u2019ve seen and worked with are the star html bug and the underscore hack. We know these are both IE-specific by checking Kevin Smith\u2019s CSS Filters chart. Let\u2019s look at each of these hacks and see how we can replace them with the same CSS feature-based solution.\n\nThe star html bug\n\nThis hack violates Tantek\u2019s second rule as it targets current (and future) UAs. I\u2019ve seen this both as a stand alone rule, as well as an override to some other rule in a style sheet. Here are some code samples:\n\n* html div#header {margin-top:-3px;}\n.promo h3 {min-height:21px;}\n* html .promo h3 {height:21px;}\n\nThe underscore hack\n\nThis hack violates Tantek\u2019s first two rules: it\u2019s invalid (according to the W3C CSS Validator) and it targets current UAs. Here\u2019s an example:\n\nol {padding:0; _padding-left:5px;}\n\nUsing child selectors\n\nWe can use the child selector to replace both the star html bug and underscore hack. Here\u2019s how:\n\n Write rules with selectors that would be successfully applied to all browsers. This may mean starting with no declarations in your rule!\n\ndiv#header {}\n.promo h3 {}\nol {padding:0;}\n\n \nTo these rules, add the IE-specific declarations.\n\ndiv#header {margin-top:-3px;}\n.promo h3 {height:21px;}\nol {padding:0 0 0 5px;}\n\n \nAfter each rule, add a second rule. The selector of the second rule must use a child selector. In this new rule, correct any IE-specific declarations previously made.\n\ndiv#header {margin-top:-3px;}\nbody > div#header {margin-top:0;}\n\n.promo h3 {height:21px;}\n.promo > h3 {height:auto; min-height:21px;}\n\nol {padding:0 0 0 5px;}\nhtml > body ol {padding:0;}\n\n \n\nVoil\u00e0 \u2013 no more hacks! There are a few caveats to this that I won\u2019t go into\u2026 but assuming you\u2019re operating in strict mode and barring any really complicated stuff you\u2019re doing in your code, your CSS will still render perfectly across browsers. And while this may make your CSS slightly heftier in size, it should future-proof it for IE7 (or so I hope). Happy holidays!", "year": "2005", "author": "Kimberly Blessing", "author_slug": "kimberlyblessing", "published": "2005-12-17T00:00:00+00:00", "url": "https://24ways.org/2005/avoiding-css-hacks-for-internet-explorer/", "topic": "code"} {"rowid": 318, "title": "Auto-Selecting Navigation", "contents": "In the article Centered Tabs with CSS Ethan laid out a tabbed navigation system which can be centred on the page. A frequent requirement for any tab-based navigation is to be able to visually represent the currently selected tab in some way.\n\nIf you\u2019re using a server-side language such as PHP, it\u2019s quite easy to write something like class=\"selected\" into your markup, but it can be even simpler than that.\n\nLet\u2019s take the navigation div from Ethan\u2019s article as an example.\n\n
    \n \n
\n\nAs you can see we have a standard unordered list which is then styled with CSS to look like tabs. By giving each tab a class which describes its logical section of the site, and then applying a matching class to the body tag of each page, we could write a clever CSS selector to highlight the correct tab on any given page. \n\nSound complicated? Well, it\u2019s not a trivial concept, but actually applying it is dead simple.\n\nModifying the markup\n\nFirst thing is to place a class name on each li in the list:\n\n
    \n \n
    \n\nThen, on each page of your site, apply the a matching class to the body tag to indicate which section of the site that page is in. For example, on your About page:\n\n...\n\nWriting the CSS selector\n\nYou can now write a single CSS rule to match the selected tab on any given page. The logic is that you want to match the \u2018about\u2019 tab on the \u2018about\u2019 page and the \u2018products\u2019 tab on the \u2018products\u2019 page, so the selector looks like this:\n\nbody.home #navigation li.home,\n body.about #navigation li.about,\n body.work #navigation li.work,\n body.products #navigation li.products,\n body.contact #navigation li.contact{\n ... whatever styles you need to show the tab selected ...\n } \n\nSo all you need to do when you create a new page in your site is to apply a class to the body tag to say which section it\u2019s in. The CSS will do the rest for you \u2013 without any server-side help.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-10T00:00:00+00:00", "url": "https://24ways.org/2005/auto-selecting-navigation/", "topic": "code"} {"rowid": 317, "title": "Putting the World into \"World Wide Web\"", "contents": "Despite the fact that the Web has been international in scope from its inception, the predominant mass of Web sites are written in English or another left-to-right language. Sites are typically designed visually for Western culture, and rely on an enormous body of practices for usability, information architecture and interaction design that are by and large centric to the Western world.\n\nThere are certainly many reasons this is true, but as more and more Web sites realize the benefits of bringing their products and services to diverse, global markets, the more demand there will be on Web designers and developers to understand how to put the World into World Wide Web.\n\nInternationalization\n\nAccording to the W3C, Internationalization is:\n\n\n\t\u201c\u2026the design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language.\u201d\n\n\nMany Web designers and developers have at least heard, if not read, about Internationalization. We understand that the Web is in fact worldwide, but many of us never have the opportunity to work with Internationalization. Or, when we do, think of it in purely technical terms, such as \u201cwhich character set do I use?\u201d\n\nAt first glance, it might seem to many that Internationalization is the act of making Web sites available to international audiences. And while that is in fact true, this isn\u2019t done by broad-stroking techniques and technologies. Instead, it involves a far more narrow understanding of geographical, cultural and linguistic differences in specific areas of the world. This is referred to as localization and is the act of making a Web site make sense in the context of the region, culture and language(s) the people using the site are most familiar with.\n\nInternationalization itself includes the following technical tasks:\n\n\n\tEnsuring no barrier exists to the localization of sites. Of critical importance in the planning stages of a site for Internationalized audiences, the role of the developer is to ensure that no barrier exists. This means being able to perform such tasks as enabling Unicode and making sure legacy character encodings are properly handled.\n\tPreparing markup and CSS with Internationalization in mind. 
The earlier in the site development process this occurs, the better. Issues such as ensuring that you can support bidirectional text, identifying language, and using CSS to support non-Latin typographic features.\n\tEnabling code to support local, regional, language or culturally related references. Examples in this category would include time/date formats, localization of calendars, numbering systems, sorting of lists and managing international forms of addresses.\n\tEmpowering the user. Sites must be architected so the user can easily choose or implement the localized alternative most appropriate to them.\n\n\nLocalization\n\nAccording to the W3C, Localization is the:\n\n\n\t\u2026adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a \u201clocale\u201d).\n\n\nSo here\u2019s where we get down to thinking about the more sociological and anthropological concerns. Some of the primary localization issues are:\n\n\n\tNumeric formats. Different languages and cultures use numbering systems unlike ours. So, any time we need to use numbers, such as in an ordered list, we have to have a means of representing the accurate numbering system for the locale in question.\n\tMoney, honey! That\u2019s right. I\u2019ve got a pocketful of ugly U.S. dollars (why is U.S. money so unimaginative?). But I also have a drawer full of Japanese Yen, Australian Dollars, and Great British Pounds. Currency, how it\u2019s calculated and how it\u2019s represented is always a consideration when dealing with localization.\n\tUsing symbols, icons and colors properly. Using certain symbols or icons on sites where they might offend or confuse is certainly not in the best interest of a site that wants to sell or promote a product, service or information type. Moreover, the colors we use are surprisingly persuasive \u2013 or detrimental. Think about colors that represent death, for example. In many parts of Asia, white is the color of death. In most of the Western world, black represents death. For Catholic Europe, shades of purple (especially lavender) have represented Christ on the cross and mourning since at least Victorian times. When Walt Disney World Europe launched an ad campaign using a lot of purple and very glitzy imagery, millions of dollars were lost as a result of this seeming subtle issue. Instead of experiencing joy and celebration at the ads, the European audience, particularly the French, found the marketing to be overly American, aggressive, depressing and basically unappealing. Along with this and other cultural blunders, Disney Europe has become a well-known case study for businesses wishing to become international. By failing to understand localization differences, and how powerful color and imagery act on the human psyche, designers and developers are put to more of a disadvantage when attempting to communicate with a given culture.\n\tChoosing appropriate references to objects and ideas. What seems perfectly natural in one culture in terms of visual objects and ideas can get confused in another environment. One of my favorite cases of this has to do with Gerber baby food. In the U.S., the baby food is marketed using a cute baby on the package. Most people in the U.S. culturally do not make an immediate association that what is being represented on the label is what is inside the container. 
However, when Gerber expanded to Africa, where many people don\u2019t read, and where visual associations are less abstract, people made the inference that a baby on the cover of a jar of food represented what is in fact in the jar. You can imagine how confused and even angry people became. Using such approaches as a marketing ploy in the wrong locale can and will render the marketing a failure.\n\n\nAs you can see, the act of localization is one that can have profound impact on the success of a business or organization as it seeks to become available to more and more people across the globe.\n\nRethinking Design in the Context of Culture\n\nWhile well-educated designers and those individuals working specifically for companies that do a lot of localization understand these nuances, most of us don\u2019t get exposed to these ideas. Yet, we begin to see how necessary it becomes to have an awareness of not just the technical aspects of Internationalization, but the socio-cultural ones within localization.\n\nWhat\u2019s more, the bulk of information we have when it comes to designing sites typically comes from studies and work done on sites built in English and promoted to Western culture at large. We\u2019re making a critical mistake by not including diverse languages and cultural issues within our usability and information architecture studies.\n\nConsider the following design from the BBC:\n\n\n\nIn this case, we\u2019re dealing with English, which is read left to right. We are also dealing with U.K. cultural norms. Notice the following:\n\n\n\tLocation of of navigation\n\tUse of the color red\n\tUse of diverse symbols\n\tMix of symbols, icons and photos\n\tLocation of Search\n\n\nNow look at this design, which is the Arabic version of the BBC News, read right to left, and dealing with cultural norms within the Arabic-speaking world.\n\n\n\nNotice the following:\n\n\n\tLocation of of navigation (location switches to the right)\n\tUse of the color blue (blue is considered the \u201csafest\u201d global color)\n\tNo use of symbols and icons whatsoever\n\tLimitation of imagery to photos\n\tIn most cases, the photos show people, not objects\n\tLocation of Search\n\n\nAdmittedly, some choices here are more obvious than others in terms of why they were made. But one thing that stands out is that the placement of search is the same for both versions. Is this the result of a specific localization decision, or based on what we believe about usability at large? This is exactly the kind of question that designers working on localization have to seek answers to, instead of relying on popular best practices and belief systems that exist for English-only Web sites.\n\nIt\u2019s a Wide World Web After All\n\nFrom this brief article on Internationalization, it becomes apparent that the art and science of creating sites for global audiences requires a lot more preparation and planning than one might think at first glance. Developers and designers not working to address these issues specifically due to time or awareness will do well to at least understand the basic process of making sites more culturally savvy, and better prepared for any future global expansion.\n\nOne thing is certain: We not only are on a dramatic learning curve for designing and developing Web sites as it is, the need to localize sites is going to become more and more a part of the day to day work. 
Understanding aspects of what makes a site international and local will not only help you expand your skill set and make you more marketable, but it will also expand your understanding of the world and the people within it, how they relate to and use the Web, and how you can help make their experience the best one possible.", "year": "2005", "author": "Molly Holzschlag", "author_slug": "mollyholzschlag", "published": "2005-12-09T00:00:00+00:00", "url": "https://24ways.org/2005/putting-the-world-into-world-wide-web/", "topic": "ux"} {"rowid": 316, "title": "Have Your DOM and Script It Too", "contents": "When working with the XMLHttpRequest object it appears you can only go one of three ways: \n\n\n\tYou can stay true to the colorful moniker du jour and stick strictly to the responseXML property\n\tYou can play with proprietary \u2013 yet widely supported \u2013 fire and inject the value of responseText property into the innerHTML of an element of your choosing\n\tOr you can be eval() and parse JSON or arbitrary JavaScript delivered via responseText\n\n\nBut did you know that there\u2019s a fourth option giving you the best of the latter two worlds? Mint uses this unmentioned approach to grab fresh HTML and run arbitrary JavaScript simultaneously. Without relying on eval(). \u201cBut wait-\u201d, you might say, \u201cwhen would I need to do this?\u201d Besides the example below this technique is handy for things like tab groups that need initialization onload but miss the main onload event handler by a mile thanks to asynchronous scripting.\n\nConsider the problem\n\nOriginally Mint used option 2 to refresh or load new tabs into individual Pepper panes without requiring a full roundtrip to the server. This was all well and good until I introduced the new Client Mode which when enabled allows anyone to view a Mint installation without being logged in. If voyeurs are afoot as Client Mode is disabled, the next time they refresh a pane the entire login page is inserted into the current document. That\u2019s not very helpful so I needed a way to redirect the current document to the login page.\n\n\n\nEnter the solution\n\nWouldn\u2019t it be cool if browsers interpreted the contents of script tags crammed into innerHTML? Sure, but unfortunately, that just wasn\u2019t meant to be. However like the body element, image elements have an onload event handler. When the image has fully loaded the handler runs the code applied to it. See where I\u2019m going with this?\n\nBy tacking a tiny image (think single pixel, transparent spacer gif \u2013 shudder) onto the end of the HTML returned by our Ajax call, we can smuggle our arbitrary JavaScript into the existing document. The image is added to the DOM, and our stowaway can go to town.\n\n

    This is the results of our Ajax call.

    \n \"\"\n\nPlease be neat\n\nSo we\u2019ve just jammed some meaningless cruft into our DOM. If our script does anything with images this addition could have some unexpected side effects. (Remember The Fly?) So in order to save that poor, unsuspecting element whose innerHTML we just swapped out from sharing Jeff Goldblum\u2019s terrible fate we should tidy up after ourselves. And by using the removeChild method we do just that.\n\n

    This is the results of our Ajax call.

    \n \"\"", "year": "2005", "author": "Shaun Inman", "author_slug": "shauninman", "published": "2005-12-24T00:00:00+00:00", "url": "https://24ways.org/2005/have-your-dom-and-script-it-too/", "topic": "code"} {"rowid": 315, "title": "Edit-in-Place with Ajax", "contents": "Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn\u2019t go that far towards implementing something really practical. We dipped our toes in, but haven\u2019t learned to swim yet.\n\nSo here is swimming lesson number one. Anyone who\u2019s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page.\n\n\n\nPrototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we\u2019ll need here) and a whole bunch of manipulations to the user interface \u2013 all through simple library calls. Here\u2019s what we\u2019re building, so let\u2019s do it.\n\nGetting Started\n\nThere are two major components to this process; the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you\u2019ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here\u2019s what Santa dropped down my chimney: \n\n\n \n \n \n Edit-in-Place with Ajax\n \n \n \n \n\n

    Edit-in-place

    \n

    Dashing through the snow on a one horse open sleigh.

\n \n \n\nSo that\u2019s our page. The editable item is going to be the <div>
    called desc. The process goes something like this:\n\n\n\tHighlight the area onMouseOver\n\tClear the highlight onMouseOut\n\tIf the user clicks, hide the area and replace with a ';\n\n var button = ' OR \n ';\n\n new Insertion.After(obj, textarea+button);\n\n Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false);\n Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false);\n\n }\n\nThe first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object.\n\nThe last thing to do before we leave the user to edit is it attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel.\n\nIn the event of a cancel, we can clean up behind ourselves like so:\n\nfunction cleanUp(obj, keepEditable){\n Element.remove(obj.id+'_editor');\n Element.show(obj);\n if (!keepEditable) showAsEditable(obj, true);\n }\n\nSaving the Changes\n\nThis is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response.\n\nfunction saveChanges(obj){\n var new_content = escape($F(obj.id+'_edit'));\n\n obj.innerHTML = \"Saving...\";\n cleanUp(obj, true);\n\n var success = function(t){editComplete(t, obj);}\n var failure = function(t){editFailed(t, obj);}\n\n var url = 'edit.php';\n var pars = 'id=' + obj.id + '&content=' + new_content;\n var myAjax = new Ajax.Request(url, {method:'post',\n postBody:pars, onSuccess:success, onFailure:failure});\n }\n\n function editComplete(t, obj){\n obj.innerHTML = t.responseText;\n showAsEditable(obj, true);\n }\n\n function editFailed(t, obj){\n obj.innerHTML = 'Sorry, the update failed.';\n cleanUp(obj);\n }\n\nAs you can see, we first grab in the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to \u201cSaving\u2026\u201d to show that an update is occurring, and make the Ajax POST.\n\nIf the Ajax fails, editFailed() sets the contents of the object to \u201cSorry, the update failed.\u201d Admittedly, that\u2019s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to \u201cSaving\u2026\u201d. No one likes a failure \u2013 especially a messy one.\n\nIf the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up.\n\nMeanwhile, back at the server\n\nThe missing piece of the puzzle is the server-side script for committing the changes to your database. Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here\u2019s what I have in PHP.\n\n\n\nNot exactly rocket science is it? I\u2019m just catching the content item from the POST and echoing it back. 
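If you want to convince yourself that the endpoint really is just parroting back whatever it receives, you can poke it directly with a throwaway call, using the same Ajax.Request method that saveChanges() uses. This is only a quick test sketch, and the id and content values are simply borrowed from the example page:

 var pars = 'id=desc&content=' + escape('Dashing through the snow');
 new Ajax.Request('edit.php', {
   method: 'post',
   postBody: pars,
   onSuccess: function(t){
     // should pop up the dummy content, echoed straight back from edit.php
     alert(t.responseText);
   }
 });

If the alert shows your dummy content, the round trip is behaving itself.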
For your application to be useful, however, you\u2019ll need to know exactly which record you should be updating. I\u2019m passing in the ID of my <div>
    , which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update.\n\nYou should also check the user\u2019s credentials to make sure they have permission to edit whatever it is they\u2019re editing. Basically the same rules apply as with any script in your application.\n\nLimitations\n\nThere are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I\u2019ve already mentioned. The second is that from an idealistic standpoint, I\u2019d rather not be using innerHTML. However, the reality is that it\u2019s presently the most efficient way of making large changes to the document. If you\u2019re serving as XML, remember that you\u2019ll need to replace these with proper DOM nodes.\n\nIt\u2019s also important to note that it\u2019s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don\u2019t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all \u2013 it fails silently. It is for this reason that this shouldn\u2019t be used as a complete replacement for a traditional, universally accessible edit form. It\u2019s a great time-saver for those with the ability to use it, but it\u2019s no replacement.\n\nSee it in action\n\nI\u2019ve put together an example page using the inert PHP script above. That is to say, your edits aren\u2019t committed to a database, so the example is reset when the page is reloaded.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-23T00:00:00+00:00", "url": "https://24ways.org/2005/edit-in-place-with-ajax/", "topic": "code"} {"rowid": 314, "title": "Easy Ajax with Prototype", "contents": "There\u2019s little more impressive on the web today than a appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of an task into a pleasurable activity.\n\nBut it\u2019s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it\u2019s just so much damn effort. Well, the good news is that \u2013 ta-da \u2013 it doesn\u2019t have to be a headache. But man does it still look impressive. Here\u2019s how to amaze your friends.\n\nIntroducing prototype.js\n\nPrototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it\u2019s a JavaScript file which you link into your page that then enables you to do cool stuff.\n\nThere\u2019s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it\u2019s good to go. What a nice man that Mr Stephenson is \u2013 friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp.\n\nFirst step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree.\n\nCutting to the chase\n\nBefore I go on and set up an example of how to use this, let\u2019s just get to the crux. 
Here\u2019s how Prototype enables you to make a simple Ajax call and dump the results back to the page:\n\nvar url = 'myscript.php';\nvar pars = 'foo=bar';\nvar target = 'output-div';\t\nvar myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});\n\nThis snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page.\n\nKnocking up a basic example\n\nSo to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call too.\n\nSo, to that basic HTML page for the user to interact with. Here\u2019s one I found whilst out carol singing:\n\n\n\n\n \n Easy Ajax\n \n \n \n\n
    \n
    \n \n \n \n
    \n
    \n \n\n\n\nAs you can see, I\u2019ve linked in prototype.js, and also a file called ajax.js, which is where we\u2019ll be putting our glue. (Careful where you leave your glue, kids.)\n\nOur basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There\u2019s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. You\u2019ll also notice that the form has a submit button \u2013 this is so that it can function as a regular form when no JavaScript is available. It\u2019s important not to get carried away and forget the basics of accessibility.\n\nMeanwhile, back at the server\n\nSo we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you\u2019d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it.\n\nHere\u2019s a quick PHP script \u2013 greeting.php \u2013 that Santa brought me early.\n\nSeason's Greetings, $the_name!

    \";\n?>\n\nYou\u2019ll perhaps want to do something a little more complex within your own projects. Just sayin\u2019.\n\nGluing it all together\n\nInside our ajax.js file, we need to hook this all together. We\u2019re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. He\u2019s how we attach an onload event to the window object and get it to call a function named init():\n\nEvent.observe(window, 'load', init, false);\n\nNow we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field.\n\nfunction init(){\n $('greeting-submit').style.display = 'none';\n Event.observe('greeting-name', 'keyup', greet, false);\n}\n\nAs you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this:\n\nfunction greet(){\n var url = 'greeting.php';\n var pars = 'greeting-name='+escape($F('greeting-name'));\n var target = 'greeting';\n var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});\n}\n\nThe key points to note here are that any user input needs to be escaped before putting into the parameters so that it\u2019s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call.\n\nThat\u2019s it\n\nNo, seriously. That\u2019s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-01T00:00:00+00:00", "url": "https://24ways.org/2005/easy-ajax-with-prototype/", "topic": "code"} {"rowid": 313, "title": "Centered Tabs with CSS", "contents": "Doug Bowman\u2019s Sliding Doors is pretty much the de facto way to build tabbed navigation with CSS, and rightfully so \u2013 it is, as they say, rockin\u2019 like Dokken. But since it relies heavily on floats for the positioning of its tabs, we\u2019re constrained to either left- or right-hand navigation. But what if we need a bit more flexibility? What if we need to place our navigation in the center?\n\nStyling the li as a floated block does give us a great deal of control over margin, padding, and other presentational styles. However, we should learn to love the inline box \u2013 with it, we can create a flexible, centered alternative to floated navigation lists.\n\nHumble Beginnings\n\nDo an extra shot of \u2018nog, because you know what\u2019s coming next. That\u2019s right, a simple unordered list:\n\n\n\nIf we were wedded to using floats to style our list, we could easily fix the width of our ul, and trick it out with some margin: 0 auto; love to center it accordingly. 
But this wouldn\u2019t net us much flexibility: if we ever changed the number of navigation items, or if the user increased her browser\u2019s font size, our design could easily break.\n\nInstead of worrying about floats, let\u2019s take the most basic approach possible: let\u2019s turn our list items into inline elements, and simply use text-align to center them within the ul:\n\n#navigation ul, #navigation ul li {\n list-style: none;\n margin: 0;\n padding: 0;\n}\n\n#navigation ul {\n text-align: center;\n}\n\n#navigation ul li {\n display: inline;\n margin-right: .75em;\n}\n\n#navigation ul li.last {\n margin-right: 0;\n}\n\nOur first step is sexy, no? Well, okay, not really \u2013 but it gives us a good starting point. We\u2019ve tamed our list by removing its default styles, set the list items to display: inline, and centered the lot. Adding a background color to the links shows us exactly how the different elements are positioned.\n\nNow the fun stuff.\n\nInline Elements, Padding, and You\n\nSo how do we give our links some dimensions? Well, as the CSS specification tells us, the height property isn\u2019t an option for inline elements such as our anchors. However, what if we add some padding to them?\n\n#navigation li a {\n padding: 5px 1em;\n}\n\nI just love leading questions. Things are looking good, but something\u2019s amiss: as you can see, the padded anchors seem to be escaping their containing list.\n\nThankfully, it\u2019s easy to get things back in line. Our anchors have 5 pixels of padding on their top and bottom edges, right? Well, by applying the same vertical padding to the list, our list will finally \u201ccontain\u201d its child elements once again.\n\n\u2019Tis the Season for Tabbing\n\nNow, we\u2019re finally able to follow the \u201cSliding Doors\u201d model, and tack on some graphics:\n\n#navigation ul li a {\n background: url(\"tab-right.gif\") no-repeat 100% 0;\n color: #06C;\n padding: 5px 0;\n text-decoration: none;\n}\n\n#navigation ul li a span {\n background: url(\"tab-left.gif\") no-repeat;\n padding: 5px 1em;\n}\n\n#navigation ul li a:hover span {\n color: #69C;\n text-decoration: underline;\n}\n\nFinally, our navigation\u2019s looking appropriately sexy. By placing an equal amount of padding on the top and bottom of the ul, our tabs are properly \u201ccontained\u201d, and we can subsequently style the links within them.\n\n\n\nBut what if we want them to bleed over the bottom-most border? Easy: we can simply decrease the bottom padding on the list by one pixel, like so.\n\nA Special Note for Special Browsers\n\nThe Mac IE5 users in the audience are likely hopping up and down by now: as they\u2019ve discovered, our centered navigation behaves rather annoyingly in their browser. As Philippe Wittenbergh has reported, Mac IE5 is known to create \u201cphantom links\u201d in a block-level element when text-align is set to any value but the default value of left. Thankfully, Philippe has documented a workaround that gets that [censored] venerable browser to behave. Simply place the following code into your CSS, and the links will be restored to their appropriate width:\n\n/**//*/\n#navigation ul li a {\n display: inline-block;\n white-space: nowrap;\n width: 1px;\n}\n/**/\n\nIE for Windows, however, displays an extra kind of crazy. 
The padding I\u2019ve placed on my anchors is offsetting the spans that contain the left curve of my tabs; thankfully, these shenanigans are easily straightened out:\n\n/**/\n* html #navigation ul li a {\n padding: 0;\n}\n/**/\n\nAnd with that, we\u2019re finally finished.\n\nAll set.\n\nAnd that\u2019s it. With your centered navigation in hand, you can finally enjoy those holiday toddies and uncomfortable conversations with your skeevy Uncle Eustace.", "year": "2005", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2005-12-08T00:00:00+00:00", "url": "https://24ways.org/2005/centered-tabs-with-css/", "topic": "code"} {"rowid": 312, "title": "Preparing to Be Badass Next Year", "contents": "Once we\u2019ve eaten our way through the holiday season, people will start to think about new year\u2019s resolutions. We tend to focus on things that we want to change\u2026 and often things that we don\u2019t like about ourselves to \u201cfix\u201d. We set rules for ourselves, or try to start new habits or stop bad ones. We focus in on things we will or won\u2019t do. \nFor many of us the list of things we \u201cought\u201d to be spending time on is just plain overwhelming \u2013 family, charity/community, career, money, health, relationships, personal development. \nIt\u2019s kinda scary even just listing it out, isn\u2019t it? I want to encourage you to think differently about next year.\nThe ever-brilliant Kathy Sierra articulates a better approach really well when talking about the attitude we should have to building great products. She tells us to think not about what the user will do with our product, but about what they are trying to achieve in the real world and how our product helps them to be badass1.\nWhen we help the user be badass, then we are really making a difference. \nI suppose this is one way of saying: focus not on what you will do, focus on what it will help you achieve. How will it help you be awesome?\nIn what ways do you want to be more badass next year?\nA professional lens\nThough of course you might want to focus in on health or family or charity or community or another area next year, many people will want to become more badass in their chosen career. \nSo let\u2019s talk about a scaffold to help you figure out your professional / career development next year. \nFirst up, an assumption: everyone wants to be awesome. Nobody gets up in the morning aiming to be crap at their job. Nobody thinks to themselves \u201cToday I am aiming for just south of mediocre, and if I can mess up everybody else\u2019s ability to do good work then that will be just perfect2\u201d. \nErgo, you want to be awesome. So what does awesome look like? \nDanger!\nThe big trap that people fall into when think about their professional development is to immediately focus on the things that they aren\u2019t good at. When you ask people \u201cwhat do you want to work on getting better at next year?\u201d they frequently gravitate to the things that they believe they are bad at. \nWhy is this a trap? Because if you focus all your time and energy on improving the areas that you suck at, you are going to end up middling at everything. Going from bad \u2192 mediocre at a given skill / behaviour takes a bunch of time and energy. So if you spend all your time going from bad \u2192 mediocre at things, what do you think you end up? That\u2019s right, mediocre. \nMediocrity is not a great career goal, kids. 
\nWhat do you already rock at?\nThe much better investment of time and energy is to go from good \u2192 awesome. It often takes the same amount of relative time and energy, but wow the end result is better! So first, ask yourself and those who know you well what you are already pretty damn good at. Combat imposter syndrome by asking others. \nThen figure out how to double down on those things. What does brilliant look like for a given skill? What\u2019s the knowledge or practice that you need to level yourself up even further in that thing?\nBut what if I really really suck?\nAdmittedly, sometimes something you suck at really is holding you back. But it\u2019s important to separate out weaknesses (just something you suck at) from controlling weaknesses (something you suck at that actually matters for your chosen career). \nIf skill x is just not an important thing for you to be good at, you may never need to care that you aren\u2019t good at it. If your current role or the one you aspire to next really really requires you to be great at x, then it\u2019s worth investing your time and energy (and possibly money too) getting better at it.\nSo when you look at the things that you aren\u2019t good at, which of those are actually essential for success?\nThe right ratio\nA good rule of thumb is to pick three things you are already good at to work on becoming awesome at and limit yourself to one weakness that you are trying to improve on. That way you are making sure that you get to awesome in areas where you already have an advantage, and limit the amount of time you are spending on going from bad \u2192 mediocre. \nLevelling up learning\nSo once you\u2019ve figured out your areas you want to focus on next year, what do you actually decide to do? \nMost of all, you should try to design your day-to-day work in a way that it is also an effective learning experience. This means making sure you have a good feedback loop \u2013 you get to try something, see if it works, learn from it, rinse and repeat. \nIt\u2019s also about balance: you want to be challenged enough for work to be interesting, without it being so hard it\u2019s frustrating. You want to do similar / the same things often enough that you get to learn and improve, without it being so repetitive that it\u2019s boring. \nContinuously getting better at things you are already good at is actually both easier and harder than it sounds. The advantage is that it\u2019s pretty easy to add the feedback loop to make sure that you are improving; the disadvantage is that you\u2019re already good at these skills so you could easily just \u201cdo\u201d without ever stopping to reflect and improve. Build in time for personal retrospectives (\u201cWhat went well? What didn\u2019t? What one thing will I choose to change next time?\u201d) and find a way of getting feedback from outside sources as well. \nAs for the new skills, it\u2019s worth knowing that skill development follows a particular pattern:\n\nWe all start out unconsciously incompetent (we don\u2019t know what to do and if we tried we\u2019d unwittingly get it wrong), progress on to conscious incompetence (we now know we\u2019re doing it wrong) then conscious competence (we\u2019re doing it right but wow it takes effort and attention) and eventually get to unconscious competence (automatically getting it right). \nYour past experiences and knowledge might let you move faster through these stages, but no one gets to skip them. 
Invest the time and remember you need the feedback loop to really improve. \nWhat about keeping up?\nEverything changes very fast in our industry. We need to invest in not falling behind, in keeping on top of what great looks like. There are a bunch of ways to do this, from reading blog posts, following links on Twitter, reading books to attending conferences or workshops, or just finding time to build things in new ways or with new technologies. \nWhich will work best for you depends on how you best learn. Do you prefer to swallow a book? Do you learn most by building or experimenting? \nWhatever your learning style though, remember that there are three real needs:\n\nScan the landscape (what\u2019s changing, does it matter)\nGain the knowledge or skills (get the detail)\nApply the knowledge or skills (use it in reality)\n\nWhen you remember that you need all three of these things it can help you get more of what you do. \nFor me personally, I use a combination of conferences and blogs / Twitter to scan the landscape. Half of what I want out of a conference is just a list of things to have on my radar that might become important. I then pick a couple of things to go read up on more (I personally learn most effectively by swallowing a book or spec or similar). And then I pick one thing at a time to actually apply in real life, to embed the skill / knowledge. \nIn summary\n\nAim to be awesome (mediocrity is not a career goal).\nFigure out what you already rock at.\nOnly care about stuff you suck at that matters for your career.\nPick three things to go from good \u2192 awesome and one thing to go from bad \u2192 mediocre (or mediocre \u2192 good) this year.\nDesign learning into your daily work.\nScan the landscape, learn new stuff, apply it for real. \nBe badass!\n\n\n\n\n\nShe wrote a whole book about it. You should read it: Badass: Making Users Awesome\u00a0\u21a9\n\n\nBefore you argue too vehemently: I suppose some antisocial sociopathic bastards do exist. Identify them, and then RUN AWAY FAST AS YOU CAN #realtalk\u00a0\u21a9", "year": "2016", "author": "Meri Williams", "author_slug": "meriwilliams", "published": "2016-12-22T00:00:00+00:00", "url": "https://24ways.org/2016/preparing-to-be-badass-next-year/", "topic": "business"} {"rowid": 311, "title": "Designing Imaginative Style Guides", "contents": "(Living) style guides and (atomic) patterns libraries are \u201call the rage,\u201d as my dear old Nana would\u2019ve said. If articles and conference talks are to be believed, making and using them has become incredibly popular. I think there are plenty of ways we can improve how style guides look and make them better at communicating design information to creatives without it getting in the way of information that technical people need.\nGuides to libraries of patterns\nMost of my consulting work and a good deal of my creative projects now involve designing style guides. I\u2019ve amassed a huge collection of brand guidelines and identity manuals as well as, more recently, guides to libraries of patterns intended to help designers and developers make digital products and websites.\nTwo pages from one of my Purposeful style guide packs. Designs \u00a9 Stuff & Nonsense.\n\u201cStyle guide\u201d is an umbrella term for several types of design documentation. Sometimes we\u2019re referring to static style or visual identity guides, other times voice and tone. We might mean front-end code guidelines or component/pattern libraries. 
These all offer something different but more often than not they have something in common. They look ugly enough to have been designed by someone who enjoys configuring a router.\nOK, that was mean, not everyone\u2019s going to think an unimaginative style guide design is a problem. After all, as long as a style guide contains information people need, how it looks shouldn\u2019t matter, should it?\nInspiring not encyclopaedic\nWell here\u2019s the thing. Not everyone needs to take the same information away from a style guide. If you\u2019re looking for markup and styles to code a \u2018media\u2019 component, you\u2019re probably going to be the technical type, whereas if you need to understand the balance of sizes across a typographic hierarchy, you\u2019re more likely to be a creative. What you need from a style guide is different.\nSure, some people1 need rules:\n\n\u201cDo this (responsive pattern)\u201d or \u201cdon\u2019t do that (auto-playing video.)\u201d \n\nThose people probably also want facts:\n\n\u201cUse this (hexadecimal value)\u201d and not that inaccessible colour combination.\u201d\n\nStyle guides need to do more than list facts and rules. They should demonstrate a design, not just document its parts. The best style guides are inspiring not encyclopaedic. I\u2019ll explain by showing how many style guides currently present information about colour. \nColours communicate\nI\u2019m sure you\u2019ll agree that alongside typography, colour\u2019s one of the most important ingredients in a design. Colour communicates personality, creates mood and is vital to an easily understandable interactive vocabulary. So you\u2019d think that an average style guide would describe all this in any number of imaginative ways. Well, you\u2019d be disappointed, because the most inspiring you\u2019ll find looks like a collection of chips from a paint chart.\n\nLonely Planet\u2019s Rizzo does a great job of separating its Design Elements from UI Components, and while its \u2018Click to copy\u2019\u00a0colour values are a thoughtful touch, you\u2019ll struggle to get a feeling for Lonely Planet\u2019s design by looking at their colour chips.\nLonely Planet\u2019s Rizzo style guide.\nLonely Planet approach is a common way to display colour information and it\u2019s one that you\u2019ll also find at Greenpeace, Sky, The Times and on countless more style guides.\n\n\n\n\n\nGreenpeace, Sky and The Times style guides.\nGOV.UK\u2014not a website known for its creative flair\u2014varies this approach by using circles, which I find strange as circles don\u2019t feature anywhere else in its branding or design. On the plus side though, their designers have provided some context by categorising colours by usage such as text, links, backgrounds and more.\nGOV.UK style guide.\nGoogle\u2019s Material Design offers an embarrassment of colours but most helpfully it also advises how to combine its primary and accent colours into usable palettes.\nGoogle\u2019s Material Design.\nWhile the ability to copy colour values from a reference might be all a technical person needs, designers need to understand why particular colours were chosen as well as how to use them. \nInspiration not documentation\nFew style guides offer any explanation and even less by way of inspiring examples. 
Most are extremely vague when they describe colour:\n\n\u201cUse colour as a presentation element for either decorative purposes or to convey information.\u201d\n\nThe Government of Canada\u2019s Web Experience Toolkit states, rather obviously.\n\n\u201cCertain colors have inherent meaning for a large majority of users, although we recognize that cultural differences are plentiful.\u201d\n\nSalesforce tell us, without actually mentioning any of those plentiful differences. \nI\u2019m also unsure what makes the Draft U.S. Web Design Standards colours a \u201cdistinctly American palette\u201d but it will have to work extremely hard to achieve its goal of communicating \u201cwarmth and trustworthiness\u201d now. \nIn Canada, \u201cbold and vibrant\u201d colours reflect Alberta\u2019s \u201cdiverse landscape.\u201d \nAdding more colours to their palette has made Adobe \u201crich, dynamic, and multi-dimensional\u201d and at Skype, colours are \u201cbold, colourful (obviously) and confident\u201d although their style guide doesn\u2019t actually provide information on how to use them.\nThe University of Oxford, on the other hand, is much more helpful by explaining how and why to use their colours:\n\n\u201cThe (dark) Oxford blue is used primarily in general page furniture such as the backgrounds on the header and footer. This makes for a strong brand presence throughout the site. Because it features so strongly in these areas, it is not recommended to use it in large areas elsewhere. However it is used more sparingly in smaller elements such as in event date icons and search/filtering bars.\u201d\n\nOpenTable style guide.\nThe designers at OpenTable have cleverly considered how to explain the hierarchy of their brand colours by presenting them and their supporting colours in various size chips. It\u2019s also obvious from OpenTable\u2019s design which colours are primary, supporting, accent or neutral without them having to say so.\nArt directing style guides\nFor the style guides I design for my clients, I go beyond simply documenting their colour palette and type styles and describe visually what these mean for them and their brand. I work to find distinctive ways to present colour to better represent the brand and also to inspire designers. \nFor example, on a recent project for SunLife, I described their palette of colours and how to use them across a series of art directed pages that reflect the lively personality of the SunLife brand. Information about HEX and RGB values, Sass variables and when to use their colours for branding, interaction and messaging is all there, but in a format that can appeal to both creative and technical people.\nSunLife style guide. Designs \u00a9 Stuff & Nonsense.\nPurposeful style guides\nIf you want to improve how you present colour information in your style guides, there\u2019s plenty you can do.\nFor a start, you needn\u2019t confine colour information to the palette page in your style guide. Find imaginative ways to display colour across several pages to show it in context with other parts of your design. Here are two CSS gradient filled \u2018cover\u2019 pages from my Purposeful style sheets.\nColour impacts other elements too, including typography, so make sure you include colour information on those pages, and vice-versa.\nPurposeful. Designs \u00a9 Stuff & Nonsense.\nA visual hierarchy can be easier to understand than labelling colours as \u2018primary,\u2019 \u2018supporting,\u2019 or \u2018accent,\u2019 so find creative ways to present that hierarchy. 
You might use panels of different sizes or arrange boxes on a modular grid to fill a page with colour.\nDon\u2019t limit yourself to rectangular colour chips, use circles or other shapes created using only CSS. If irregular shapes are a part of your brand, fill SVG silhouettes with CSS and then wrap text around them using CSS shapes. \nPurposeful. Designs \u00a9 Stuff & Nonsense.\nSumming up\nIn many ways I\u2019m as frustrated with style guide design as I am with the general state of design on the web. Style guides and pattern libraries needn\u2019t be dull in order to be functional. In fact, they\u2019re the perfect place for you to try out new ideas and technologies. There\u2019s nowhere better to experiment with new properties like CSS Grid than on your style guide.\nThe best style guide designs showcase new approaches and possibilities, and don\u2019t simply document the old ones. Be as creative with your style guide designs as you are with any public-facing part of your website.\n\nPurposeful are HTML and CSS style guides templates designed to help you develop creative style guides and pattern libraries for your business or clients. Save time while impressing your clients by using easily customisable HTML and CSS files that have been designed and coded to the highest standards. Twenty pages covering all common style guide components including colour, typography, buttons, form elements, and tables, plus popular pattern library components. Purposeful style guides will be available to buy online in January.\n\n\n\n\nBoring people\u00a0\u21a9", "year": "2016", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2016-12-13T00:00:00+00:00", "url": "https://24ways.org/2016/designing-imaginative-style-guides/", "topic": "design"} {"rowid": 310, "title": "Fairytale of new Promise", "contents": "There are only four good Christmas songs.\nI know, yeah, JavaScript or whatever. We\u2019ll get to that in a minute, I promise.\nFirst\u2014and I cannot stress this enough\u2014 there are four good Christmas songs. You\u2019re free to disagree with me here, of course, but please try to understand that you will be wrong.\nThey don\u2019t all have the most safe-for-work titles; I can\u2019t list all of them here, but if you choose to let your fingers do the walkin\u2019 to your nearest search engine, I will say that one was released by the band FEAR way back in 1982 and one was on Run the Jewels\u2019 self-titled debut album. The lyrics are a hell of a lot worse than the titles, so maybe wait until you get home from work before you queue them up. Wear headphones, if you\u2019ve got thin walls.\nFor my money, though, the two I can reference by name are the top of that small heap: Tom Waits\u2019 Christmas Card from a Hooker in Minneapolis, and The Pogues\u2019 Fairytale of New York. The former once held the honor of being the only good Christmas song\u2014about which which I was also unequivocally correct, right up until I changed my mind. It\u2019s not the song up for discussion today, but feel free to familiarize yourself just the same\u2014I\u2019ll wait.\nFairytale of New York\u2014the top of the list\u2014starts out by hinting at some pretty standard holiday fare; dreams and cheer and whatnot. Typical seasonal stuff, so long as you ignore that the story seems to be recounted as a drunken flashback in a jail cell. 
You can probably make a few guesses at the underlying spirit of the song based on that framing: following a lucky break, our bright-eyed protagonists move to New York in search of fame and fortune, only to quickly descend into bad decisions, name-calling, and vaguely festive chaos.\nThis song speaks to me on a couple of levels, not the least of which is as a retelling of my day-to-day interactions with JavaScript. Each day\u2019s melody might vary a little bit, granted, but the lyrics almost always follow a pretty clear arc toward \u201cPARENTAL ADVISORY: EXPLICIT CONTENT.\u201d You might have heard a similar tune yourself; it goes a little somethin\u2019 like setTimeout(function() { console.log( \"this should be happening last\" ); }, 1000); . Callbacks are calling callbacks calling callbacks and something is happening somewhere, as the JavaScript interpreter plods through our code start-to-finish, line-by-line, step-by-step. If we need to take actions based on the results of something that could take its sweet time resolving, well, we\u2019d better fiddle with the order of things to make sure those actions don\u2019t happen too soon.\n\u201cBut I can see a better time,\u201d as the song says, \u201cwhen all our dreams come true.\u201d So, with that Pogues brand of holiday spirit squarely in mind\u2014by which I mean that your humble narrator is almost certainly drunk, and may be incarcerated at the time of publication\u2014gather \u2019round for a story of hope, of hardships, of semi-asynchronous JavaScript programming, and ultimately: of Promise unfulfilled.\nThe Main Thread\nJavaScript is single-minded, in a manner of speaking. Anything we tell the JavaScript runtime to do goes into a single-file queue; you\u2019ll see it referred to as the \u201cmain thread,\u201d or \u201cUI thread.\u201d That thread can be shared by a number of critical browser processes, like rendering and re-rendering parts of the page, and user interactions ranging from the simple\u2014say, highlighting text\u2014to the more complex\u2014interacting with form elements.\nIf that sounds a little scary to you, well, that\u2019s because it is. The more complex our scripts, the more we\u2019re cramming into that single-file main thread, to be processed along with\u2014say\u2014some of our CSS animations. Too much JavaScript clogging up the main thread means a lot of user-facing performance jankiness. Getting away from that single thread is a big part of all the excitement around Web Workers, which allow us to offload entire scripts into their own dedicated background threads\u2014though not without limitations of their own. Outside of Web Workers, that everything-thread is the only game in town: scripts executed one thing at a time, functions calling functions calling functions, taking numbers and crowding up the same deli counter as a user\u2019s interactions\u2014which, in this already strained metaphor, would be ham, I guess?\nAsynchronous JavaScript\nNow, those queued actions may include asynchronous things. For example: AJAX callbacks, setTimeout/setInterval, and addEventListener won\u2019t block the main thread while we\u2019re waiting for a request to come back, a timer to tick away, or an event to trigger. 
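Just to make that ordering concrete, here's a quick sketch using the first of those, with a made-up URL standing in for a real endpoint:

var request = new XMLHttpRequest();

request.open( "GET", "/some-data.json" );

// This callback only runs once the response turns up, whenever that happens to be:
request.onload = function() {
  console.log( "this happens later" );
};

request.send();

console.log( "this happens first" );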
Once those things do kick in, though, the actions they\u2019re meant to perform will get shuffled right back into that single-thread queue.\nThere are a couple of places you might have written asynchronously-fired JavaScript, even if you\u2019re not super familiar with the overarching concept: XMLHttpRequest\u2014\u201cAJAX,\u201d if ya nasty\u2014or just kicking off a function once a user triggers a click or mouseenter event. Event-driven development is writ a little larger, with the overall flow of the script dictated by events, both internal and external. Writing event-driven JavaScript applications is a step in the right direction for sure\u2014it won\u2019t cure what ails the main thread, but it does work with the medium in a reasonable way. Event-driven development allows us to manage our use of the main thread in a way that makes sense. If any of this rings a bell for you, the motivation for Promises should feel familiar.\nFor example, a custom init event might kick things off, and fire a create event that applies our classes and restructures our markup which, on completion, fires a bindEvents event to handle all the event listeners for user interaction. There might not sound like much difference between that and one big function that kicks off, manipulates the DOM, and binds our events line-by-line\u2014but in a script of sufficient size and complexity we\u2019re not only provided with a decoupled flow through the script, but obvious touchpoints for future updates and a predictable structure for ongoing maintenance. \nThis pattern falls apart a little where we were still creating, binding, and listening for events in the same top-to-bottom, one-item-at-a-time way\u2014we had to set a listener on a given object before the event fires, or nothing would happen:\n// Create the event:\nvar event = document.createEvent( \"Event\" );\n\n// Name the event:\nevent.initEvent( \"doTheStuff\", true, true );\n\n// Listen for the custom `doTheStuff` event on `window`:\nwindow.addEventListener( \"doTheStuff\", initializeEverything );\n\n// Fire the custom event\nwindow.dispatchEvent( event );\nThis example is a little contrived, and this stuff is a lot more manageable for sure with the addition of a framework, but that\u2019s the basic gist: create and name the event, add a listener for the event, and\u2014after setting our listener\u2014dispatch the event.\nEvents and callbacks aren\u2019t the only game in town for weaving our way in and out of the main thread, though\u2014at least, not anymore. \nPromises\nA Promise is, at the risk of sounding sentimental, pure potential\u2014an empty container into which a value eventually results. A Promise can exist in several states: \u201cpending,\u201d while the computation they contain is being performed or \u201cresolved\u201d once that computation is complete. Once resolved, a Promise is \u201cfulfilled\u201d if it gave us back something we expect, or \u201crejected\u201d if it didn\u2019t.\nThe Promise constructor accepts a callback with two arguments: resolve and reject. We perform an action\u2014asynchronous or otherwise\u2014within that callback. If everything in there has gone according to plan, we call resolve. If something has gone awry, we call reject\u2014with an error, conventionally. To illustrate, let\u2019s tack something together with a pretty decent chance of doing what we don\u2019t want: a promise meant only to give us the number 1, but has a chance of giving us back a 2. 
No reasonable person would ever do this, of course, but I wouldn\u2019t necessarily put it past me.\nvar promisedOne = new Promise( function( resolve, reject ) {\n var coinToss = Math.floor( Math.random() * 2 ) + 1;\n\n if( coinToss === 1 ) {\n resolve( coinToss );\n } else {\n reject( new Error( \"That ain\u2019t a one.\" ) );\n }\n});\nThere\u2019s nothing too surprising in there, after you boil it all down. It\u2019s a little return-y, with the exception that we\u2019re flagging results as \u201cas expected\u201d or \u201csomething went wrong.\u201d\nTapping into that Promise uses another new keyword: then\u2014and as someone who attempts to make sense of JavaScript by breaking it down to plain ol\u2019 human-language, I\u2019m a big fan of this syntax. then is tacked onto our Promise identifier, and does just what it says on the tin: once the Promise is resolved, then do one of two things, both supplied as callbacks: the first in the case of a fulfilled promise, and the second in the case of a rejected one. Those two callbacks will have, as arguments, the results we specified with resolve orreject, respectively. It sounds like a lot in prose, but in code it\u2019s a pretty simple pattern:\npromisedOne.then( function( result ) {\n console.log( result );\n}, function( error ) {\n console.error( error );\n});\nIf you\u2019ve spent any time working with AJAX\u2014jQuery-wise, in particular\u2014you\u2019ve seen something like this pattern before: a success callback and an error callback. The state of a promise, once fulfilled or rejected, cannot be changed\u2014any reference we make to promisedOne will have a single, fixed result.\nIt may not look like too much the way I\u2019m using it here, but it\u2019s powerful stuff\u2014a pattern for asynchronously resolving anything. I\u2019ve recently used Promises alongside a script that emulates Font Load Events, to apply webfonts asynchronously and avoid a potential performance hit. Font Face Observer allows us to, as the name implies, determine when the files referenced by our @font-face rules have finished loading. \nvar fontObserver = new FontFaceObserver( \"Fancy Font\" );\n\nfontObserver.check().then(function() {\n document.documentElement.className += \" fonts-loaded\";\n}, function( error ) {\n console.error( error );\n});\nfontObserver.check() gives us back a Promise, allowing us to chain on a then containing our callbacks for success and failure. We use the fulfilled callback to bolt a class onto the page once the font file has been fully transferred. We don\u2019t bother including an argument in the first function, since we don\u2019t care about the result itself so much as we care that the promise resolved without error\u2014we\u2019re not doing anything with the resolved value, just adding a class to the page. We do include the error argument, since we\u2019ll want to know what happened should something go wrong.\nNow, this isn\u2019t the tidiest syntax around\u2014at least to my eyes\u2014with those two functions just kinda floating in a then. Luckily there\u2019s an similar alternative syntax; one that I find a bit easier to parse at-a-glance:\nfontObserver.check()\n .then(function() {\n document.documentElement.className += \" fonts-loaded\";\n })\n .catch(function( error ) {\n console.log( error );\n });\nThe first callback inside then provides us with our success state, while the catch provides us with a single, explicit \u201csomething went wrong\u201d callback. 
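One practical difference between the two, as a rough aside: if something inside the fulfilled callback itself throws, a chained catch will still hear about it, whereas the second argument to then only ever handles the original Promise being rejected. Sketching that out with the promisedOne from earlier:

promisedOne
  .then(function( result ) {
    // Suppose we fumble the result once we've got it:
    throw new Error( "Dropped it." );
  })
  .catch(function( error ) {
    // This handles a rejected promisedOne *and* the error thrown above.
    console.error( error );
  });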
The two syntaxes aren\u2019t completely identical in all situations, but for a simple case like this, I find it a little neater.\nThe Common Thread\nI guess I still owe you an explanation, huh. Not about the JavaScript-whatever; I think I\u2019ve explained that plenty. No, I mean Fairytale of New York, and why it\u2019s perched up there at the top of the four (4) song heap.\nFairytale is a sad song, ostensibly. If you follow the main thread\u2014start to finish, line-by-line, step by step\u2014 Fairytale is a sad song. And I can see you out there, visions of Die Hard dancing in your heads: \u201cbut is it a Christmas song?\u201d\nWell, for my money, nothing says \u201cholidays\u201d quite like unreliable narration.\nShane MacGowan, the song\u2019s author, has placed the first verse about \u201cChristmas Eve in the drunk tank\u201d as happening right after the \u201clucky one, came in eighteen-to-one\u201d\u2014not at the chronological end of the story. That means the song might not be mostly drunken flashback, but all of it a single, overarching flashback including a Christmas Eve in protective custody. It could be that the man and woman are, together, recounting times long past\u2014good times and bad times\u2014maybe not even in chronological order. Hell, the \u201cNYPD Choir\u201d mentioned in the chorus? There\u2019s no such thing.\nWe\u2019re not big Christmas folks, my family and I. But just the same, every year, the handful of us get together, and every year\u2014like clockwork\u2014there\u2019s a lull in conversation, there\u2019s a sharp exhale, and Ma says \u201cwe all made it.\u201d Not to a house, not to a dinner, but through another year, to another Christmas. At this point, without fail, someone starts telling a story\u2014and one begets another, and so on. Sometimes the stories are happy, sometimes they\u2019re sad, more often than not they\u2019re both. Some are about things we were lucky to walk away from, some are about a time when another one of us didn\u2019t.\nStart-to-finish, line-by-line, step-by-step, the main thread through the year doesn\u2019t change, and maybe there isn\u2019t a whole lot we can do to change it. But by carefully weaving our way in and out of that thread\u2014stories all out of sync and resolving one way or the other, with the results determined by questionably reliable narrators\u2014we can change the way we interact with it and, little by little, we can start making sense of it.", "year": "2016", "author": "Mat Marquis", "author_slug": "matmarquis", "published": "2016-12-19T00:00:00+00:00", "url": "https://24ways.org/2016/fairytale-of-new-promise/", "topic": "code"} {"rowid": 309, "title": "HTTP/2 Server Push and Service Workers: The Perfect Partnership", "contents": "Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2 which has a killer feature known as HTTP/2 Server Push.\nDuring this year\u2019s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics that he covered immediately piqued my interest; the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination.\n\nIf you\u2019ve never heard of HTTP/2 Server Push before, fear not - it\u2019s not as scary as it sounds. 
HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users.\nWhat is HTTP/2 Server Push?\nWhen a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make multiple HTTP requests for each resource that it needs and begin downloading one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower. \nBefore we go any further, let\u2019s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this:\n\nAs you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs. \nThis is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and \u201cpushes\u201d it to browser. This happens when the first HTTP request for the web page takes place and it eliminates an extra round trip, making your site faster. \nUsing the same example above, let\u2019s \u201cpush\u201d the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like.\n\nWhoa, that looks different - let\u2019s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn\u2019t need to make an extra HTTP request to the server, instead it receives the critical files it needs all at once. Much better! \nThere are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I\u2019ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js. In this post, I\u2019m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings.\nHTTP/2 Server Push is awesome, but it isn\u2019t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn\u2019t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache \u201caware\u201d. This means that the server isn\u2019t aware about the state of your client. If you\u2019ve visited a web page before, the server isn\u2019t aware of this and will push the resource again anyway, regardless of whether or not you need it. 
HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory browsers can cancel HTTP/2 Server Push requests if they\u2019re already got something in cache but unfortunately no browsers currently support it. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs.\nHTTP/2 Server Push & Service Workers\nSo where do Service Workers fit in? Believe it or not, when combined together HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you\u2019ve not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as middleman between the client and the browser and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand.\nUsing Service Workers, you can easily cache assets on a user\u2019s device. This means when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in cache on the users device. If it does, then it can simply return and serve them directly from the device instead of ever hitting the server.\nLet\u2019s stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load!\nLet\u2019s put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files.\n\n\n\n \n HTTP2 Push Demo\n\n\n

<!-- The original listing lost its markup somewhere along the way; what follows is a
     best-guess reconstruction, and the image/script filenames are made up. -->
<!DOCTYPE html>
<html>
<head>
  <title>HTTP2 Push Demo</title>
</head>
<body>
  <h1>HTTP2 Push</h1>

  <!-- a few images -->
  <img src="./images/one.jpg" alt="">
  <img src="./images/two.jpg" alt="">
  <img src="./images/three.jpg" alt="">

  <!-- two JavaScript files -->
  <script src="./js/script1.js"></script>
  <script src="./js/script2.js"></script>

  <!-- register the Service Worker, only where it's supported -->
  <script>
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/service-worker.js');
    }
  </script>
</body>
</html>
    \n \n \n \n \n \n \n\n\nIn the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker toolbox . It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push.\nThe Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns.\nI\u2019ve add the following code to the service-worker.js file.\n(global => {\n 'use strict';\n\n // Load the sw-toolbox library.\n importScripts('/js/sw-toolbox/sw-toolbox.js');\n\n // The route for any requests\n toolbox.router.get('/push', global.toolbox.fastest);\n\n toolbox.router.get('/images/(.*)', global.toolbox.fastest);\n\n toolbox.router.get('/js/(.*)', global.toolbox.fastest);\n\n // Ensure that our service worker takes control of the page as soon as possible.\n global.addEventListener('install', event => event.waitUntil(global.skipWaiting()));\n global.addEventListener('activate', event => event.waitUntil(global.clients.claim()));\n})(self);\nLet\u2019s break this code down further. Around line 4, I am importing the Service Worker toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I\u2019ve told the toolbox to listen to these routes too.\nThe best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn\u2019t exist in cache, the code above will add it into cache so that we can retrieve it when it\u2019s needed again.\nYou may also notice the code global.toolbox.fastest - this is important because gives you the compromise of fulfilling from the cache immediately, while firing off an additional HTTP request updating the cache for the next visit.\nBut what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to \u201cpush\u201d everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the users device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing! \nUsing this technique, the waterfall chart for a repeat visit should look like the image below.\n\nIf you look closely at the image above, you\u2019ll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache.\nWhether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user\u2019s browser doesn\u2019t support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience. 
HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance.\nSummary\nWhen used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site\u2019s load times. Together they mean super fast first load times and even faster repeat views to a web page. Whilst this technique is really effective, it\u2019s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don\u2019t just simply \u201cpush\u201d everything; it could actually lead to having slower page load times. If you\u2019d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information. \nAll of the code in this example is available on my Github repo - if you have any questions, please submit an issue and I\u2019ll get back to you as soon as possible. \nIf you\u2019d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone\u2019s talk at this years Chrome Developer Summit. \nI\u2019d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live!", "year": "2016", "author": "Dean Hume", "author_slug": "deanhume", "published": "2016-12-15T00:00:00+00:00", "url": "https://24ways.org/2016/http2-server-push-and-service-workers/", "topic": "code"} {"rowid": 308, "title": "How to Make a Chrome Extension to Delight (or Troll) Your Friends", "contents": "If you\u2019re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends. And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius. \nBut today is different. You\u2019re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack\u2026 right? \nNope. \nIf there\u2019s one thing 2016 has taught me, it\u2019s that we\u2014the self-serious, world-changing tech movers and shakers of the universe\u2014haven\u2019t changed one bit from our younger, more delightable selves.\nHow do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like \u201cStinky Dinosaur\u201d or \u201cTiny Potato\u201d. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there\u2019s still plenty of room in our big, grown-up hearts for fun.\n\n\nToday, we\u2019re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension. 
\nHere\u2019s a sneak peek at my final result featuring my partner, my cat, and I in cheerfully weird accessories. Your result will look however you want it to.\n\nAlong the way, we\u2019ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful. \nWhat you\u2019ll need\n\nAdobe Illustrator (or a similar illustration program to export PNG)\nSome images of your friends\nA text editor\n\nNote: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you\u2019ll find ways to do it.\nGetting started\n\nDownload a local copy of the boilerplate for today\u2019s tutorial here, and open it in a text editor. Inside, you\u2019ll find a simple web app that you can run in Chrome. \nOpen index.html in Chrome. You should see a grey page that says \u201cNoname\u201d.\nOpen template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template.\n\nNote: We\u2019re using Google Chrome to build and preview this application because the end-result is a Chrome extension. This means that the application isn\u2019t totally cross-browser compatible, but that\u2019s okay.\nStep 1: Gather your friends\nThe first thing to do is choose who your muses are. Since the holidays are upon us, I\u2019d suggest finding inspiration in your family.\nCreate your artwork\nFor each person, find an image where their face is pointed as forward as possible. Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. Here\u2019s my crew:\nAs you can see, my use of the word \u201cfamily\u201d extends to cats. There\u2019s no judgement here.\nNotice that some of my photos don\u2019t completely fill the artboard\u2013that\u2019s fine. The images will be clipped into ovals when they\u2019re rendered in the application.\nNow, export your images by following these steps:\n\nTurn the Template layer off and export the images as PNGs. \nIn the Export dialog, tick the \u201cUse Artboards\u201d checkbox and enter the range with your faces. \nExport at 72ppi to keep things running fast. \nSave your images into the images/ folder in your project.\n\nAdd your images to config.js\nOpen scripts/config.js. This is where you configure your extension. \nAdd key value pairs to the faces object. The key should be the person\u2019s name, and the value should be the filepath to the image.\nfaces: {\n leslie: 'images/face_leslie.png',\n kyle: 'images/face_kyle.png',\n beep: 'images/face_beep.png'\n}\nThe application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads. The only thing that\u2019s special about the faces object is that person\u2019s name will also be displayed when their face is chosen.\nNow, when you refresh the project in Chrome, you should see one of your friends along with their name, like this:\n\nCongrats, you\u2019re off and running!\nStep 2: Add adjectives\nNow that you\u2019ve loaded your friends into the application, it\u2019s time to call them names. 
This step definitely yields the most laughs for the least amount of effort.\nAdd a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering\u2026\nprefixes: [\n 'Loving',\n 'Drunk',\n 'Chatty',\n 'Merry',\n 'Creepy',\n 'Introspective',\n 'Cheerful',\n 'Awkward',\n 'Unrelatable',\n 'Hungry',\n ...\n]\nWhen you refresh Chrome, you should see one of these words prefixed before your friend\u2019s name. Voila!\n\nStep 3: Choose your color palette\nReal talk: I\u2019m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you\u2019ve been blessed with the gift of color aptitude, skip ahead.\nHow to choose colors\nTo create a color palette, I start by going to a Coolors.co, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like.\nCopy these colors into your swatches in Adobe Illustrator. They\u2019ll be the base for any illustrations you create later.\nNow you need a set of background colors. Here\u2019s my trick to making these consistent with your illustration palette without completely blending in. Use the \u201cAdjust Palette\u201d tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors.\n\nAdd your background colors to config.js\nCopy your hex codes into the bgColors array in config.js.\nbgColors: [\n '#FFDD77',\n '#FF8E72',\n '#ED5E84',\n '#4CE0B3',\n '#9893DA',\n ...\n]\nNow when you go back to Chrome and refresh the page, you\u2019ll see your new palette!\n\nStep 4: Accessorize\nThis is the fun part. We\u2019re going to illustrate objects, accessories, lizards\u2014whatever you want\u2014and layer them on top of your friends.\nYour objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like \u201chats\u201d or \u201cglasses\u201d. This will allow combinations of accessories to show at once, without showing two of the same type on the same person.\nCreate a group of accessories\nTo get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don\u2019t feel like illustrating, you can use cut-out images instead.\n\nNext, follow the same steps as you did when you exported the faces. Here they are again:\n\nTurn the Template layer off and export the images as PNGs. \nIn the Export dialog, tick the \u201cUse Artboards\u201d checkbox and enter the range with your hats. \nExport at 72ppi to keep things running fast. \nSave your images into the images/ folder in your project.\n\nAdd your accessories to config.js\nIn config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array:\ncustomProps: {\n hats: [ \n 'images/hat_crown.png',\n 'images/hat_santa.png',\n 'images/hat_tophat.png',\n 'images/hat_antlers.png'\n ]\n}\nRefresh Chrome and behold, accessories!\n\nCreate as many more accessories as you want\nRepeat the steps above to create as many groups of accessories as you want. 
I went on to make glasses and hairstyles, so my final illustrator file looks like this:\n\nThe last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses:\ncustomProps: {\n hair: [ \n 'images/hair_bowl.png',\n 'images/hair_bob.png'\n ], \n hats: [ \n 'images/hat_crown.png',\n 'images/hat_santa.png',\n 'images/hat_tophat.png',\n 'images/hat_antlers.png'\n ],\n glasses: [\n 'images/glasses_aviators.png',\n 'images/glasses_monacle.png'\n ]\n}\nAnd, there you have it! Randomly generated friends with random accessories. \n\nFeel free to go much crazier than I did. I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress\u2026\nStep 5: Publish it\nIt\u2019s time to put this in your new tabs! You have two options:\n\nPublish it as a Chrome extension in the Chrome Web Store.\nHost it as a website and point to it with the New Tab Redirect extension.\n\nToday, we\u2019re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I\u2019d suggest sticking to Option #2.\nHow to make a simple Chrome extension to replace the new tab page\nAll you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement:\n{\n \"manifest_version\": 2,\n \"name\": \"Your extension name\",\n \"version\": \"1.0\",\n \"chrome_url_overrides\" : {\n \"newtab\": \"index.html\"\n }\n}\nTo test your extension, you\u2019ll need to run it in Developer Mode. Here\u2019s how to do that:\n\nGo to the Extensions page in Chrome by navigating to chrome://extensions/.\nTick the checkbox in the upper-right corner labelled \u201cDeveloper Mode\u201d.\nClick \u201cLoad unpacked extension\u2026\u201d and select this project.\nIf everything is running smoothly, you should see your project when you open a new tab. If there are any errors, they should appear in a yellow box on the Extensions page.\n\nVoila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you.\nShare the love\nNow that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they\u2019ve become a part of everyday life\u2013just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats. \nAt the end of the day, let\u2019s make something that makes us happy. Cheers!", "year": "2016", "author": "Leslie Zacharkow", "author_slug": "lesliezacharkow", "published": "2016-12-08T00:00:00+00:00", "url": "https://24ways.org/2016/how-to-make-a-chrome-extension/", "topic": "code"} {"rowid": 307, "title": "Get the Balance Right: Responsive Display Text", "contents": "Last year in 24 ways I urged you to Get Expressive with Your Typography. 
I made the case for grabbing your readers\u2019 attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout.\nWhen setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader\u2019s preferences. You can take the same approach with display text by choosing larger sizes from the same scale.\nHowever, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window.\nIntroducing viewport units\nWith CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference1 pixels:\n\nvw - viewport width,\nvh - viewport height,\nvmin - viewport height or width, whichever is smaller\nvmax - viewport height or width, whichever is larger\n\nIn one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. The following makes the heading font size 13% of the viewport width:\nh1 {\n font-size: 13 vw;\n}\nSo for a selection of widths, the rendered font size would be:\nRendered font size (px)\nViewport width\n13\u202fvw\n320\n42\n768\n100\n1024\n133\n1280\n166\n1920\n250\n\nA problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation. \nLandscape text is much bigger than portrait text when using vw units.\nThe proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the inconsistency and considered design you would want when designing to make an impression.\nHowever if the text was the same size in both orientations, the visual effect would be much more consistent. This where vmin comes into its own. 
Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering.\nh1 {\n font-size: 13vmin;\n}\nLandscape text is consistent with portrait text when using vmin units.\nComparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude:\nRendered font size (px)\nViewport\n13\u202fvw\n13\u202fvmin\n320 \u00d7 480\n42\n42\n414 \u00d7 736\n54\n54\n768 \u00d7 1024\n100\n100\n1024 \u00d7 768\n133\n100\n1280 \u00d7 720\n166\n94\n1366 \u00d7 768\n178\n100\n1440 \u00d7 900\n187\n117\n1680 \u00d7 1050\n218\n137\n1920 \u00d7 1080\n250\n140\n2560 \u00d7 1440\n333\n187\n\nHybrid font sizing\nUsing vertical media queries to set text in direct proportion to screen dimensions works well when sizing display text. In can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading:\nh1 {\n font-size: 13vmin;\n}\nh2 {\n font-size: 5vmin;\n}\nUsing vmin alone for supporting text can scale it too quickly\nThe balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens.\nA solution to this is use a hybrid method of sizing text2. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. For example:\nh2 {\n font-size: calc(0.5rem + 2.5vmin);\n}\nFor a 320 px wide screen, the font size will be 16 px, calculated as follows:\n(0.5 \u00d7 16) + (320 \u00d7 0.025) = 8 + 8 = 16px\nFor a 768 px wide screen, the font size will be 27 px:\n(0.5 \u00d7 16) + (768 \u00d7 0.025) = 8 + 19 = 27px\nThis results in a more balanced subheading that doesn\u2019t take emphasis away from the main heading:\n\nTo give you an idea of the effect of using a hybrid approach, here\u2019s a side-by-side comparison of hybrid and viewport text sizing:\ntable.ex--scale{width:100%;overflow: hidden;} table.ex--scale td{vertical-align:baseline;text-align:center;padding:0} tr.ex--scale-key{color:#666} tr.ex--scale-key td{font-size:.875rem;padding:0 0.125em} .ex--scale-2 tr.ex--scale-size{color:#ccc} tr.ex--scale-size td{font-size:1em;line-height:.34em;padding-bottom:.5rem} td.ex--scale-step{color:#000} td.ex--scale-hilite{color:red} .ex--scale-3 tr.ex--scale-size td{line-height:.9em}\n\ntop: calc() hybrid method; bottom: vmin only\n16\n20\n27\n32\n35\n40\n44\n16\n24\n38\n48\n54\n64\n72\n320\n480\n768\n960\n1080\n1280\n1440\n\nOver this festive period, try experiment with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting.\n\n\n\n\nA reference pixel is based on the logical resolution of a device which takes into account double density screens such as Retina displays.\u00a0\u21a9\ufe0e\n\n\nFor even more sophisticated uses of hybrid text sizing see the work of Mike Riethmuller.\u00a0\u21a9\ufe0e", "year": "2016", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2016-12-09T00:00:00+00:00", "url": "https://24ways.org/2016/responsive-display-text/", "topic": "code"} {"rowid": 306, "title": "What next 
for CSS Grid Layout?", "contents": "In 2012 I wrote an article for 24 ways detailing a new CSS Specification that had caught my eye, at the time with an implementation only in Internet Explorer. What I didn\u2019t realise at the time was that CSS Grid Layout was to become a theme on which I would base the next four years of research, experimentation, writing and speaking. \nAs I write this article in December 2016, we are looking forward to CSS Grid Layout being shipped in Chrome and Firefox. What will ship early next year in those browsers is expanded and improved from the early implementation I explored in 2012. Over the last four years the spec has been developed as part of the CSS Working Group process, and has had input from browser engineers, specification writers and web developers. Use cases have been discussed, and features added.\nThe CSS Grid Layout specification is now a Candidate Recommendation. This status means the spec is to all intents and purposes, finished. The discussions now happening are on fine implementation details, and not new feature ideas. It makes sense to draw a line under a specification in order that browser vendors can ship complete, interoperable implementations. That approach is good for all of us, it makes development far easier if we know that a browser supports all of the features of a specification, rather than working out which bits are supported. However it doesn\u2019t mean that works stops here, and that new use cases and features can\u2019t be proposed for future levels of Grid Layout. Therefore, in this article I\u2019m going to take a look at some of the things I think grid layout could do in the future. I would love for these thoughts to prompt you to think about how Grid - or any CSS specification - could better suit the use cases you have.\nSubgrid - the missing feature of Level 1\nThe implementation of CSS Grid Layout in Chrome, Firefox and Webkit is comparable and very feature complete. There is however one standout feature that has not been implemented in any browser as yet - subgrid. Once you set the value of the display property to grid, any direct children of that element become grid items. This is similar to the way that flexbox behaves, set display: flex and all direct children become flex items. The behaviour does not apply to children of those items. You can nest grids, just as you can nest flex containers, but the child grids have no relationship to the parent.\n\nNesting Grids by Rachel Andrew (@rachelandrew) on CodePen.\nThe subgrid behaviour would enable the grid defined on the parent to be used by the children. I feel this would be most useful when working with a multiple column flexible grid - for example a typical 12 column grid. I could define a grid on a wrapper, then position UI elements on that grid - from the major structural elements of my page down through the child elements to a form where I wanted the field to line up with items above.\nThe specification contained an initial description of subgrid, with a value of subgrid for grid-template-columns and grid-template-rows, you can read about this in the August 2015 Working Draft. This version of the specification would have meant you could declare a subgrid in one dimension only, and create a different set of tracks in the other.\nIn an attempt to get some implementation of subgrid, a revised specification was proposed earlier this year. This gives a single subgrid value of the display property. 
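As a rough sketch of how the two proposals read side by side (the row tracks here are just for illustration):

/* August 2015 Working Draft: opt in to the parent's tracks per axis */
.child {
  display: grid;
  grid-template-columns: subgrid;
  grid-template-rows: auto auto;
}

/* Revised 2016 proposal: one value, inheriting tracks in both dimensions */
.child {
  display: subgrid;
}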
As we now cannot specify a subgrid on rows OR columns this limits us to have a subgrid that works in two dimensions. At this point neither version has been implemented by anyone, and subgrids are marked as \u201cat risk\u201d in the Level 1 Candidate Recommendation. With regard to \u2018at-risk\u2019 this is explained as follows:\n\n\u201c\u2018At-risk\u2019 is a W3C Process term-of-art, and does not necessarily imply that the feature is in danger of being dropped or delayed. It means that the WG believes the feature may have difficulty being interoperably implemented in a timely manner, and marking it as such allows the WG to drop the feature if necessary when transitioning to the Proposed Rec stage, without having to publish a new Candidate Rec without the feature first.\u201d \n\nIf we lose subgrid from Level 1, as it looks likely that we will, this does give us a chance to further discuss and iterate on that feature. My current thoughts are that I\u2019m not completely happy about subgrids being tied to both dimensions and feel that a return to the earlier version, or something like it, would be preferable. \nFurther reading about subgrid\n\nMy post from 2015 detailing why I feel subgrid is important\nMy post based on the revised specification\nEric Meyer\u2019s thoughts on subgrid\nWrite-up of a discussion from Igalia who work on the Blink and Webkit browser implementations\n\nStyling cells, tracks and areas\nHaving defined a grid with CSS Grid Layout you can place child elements into that grid, however what you can\u2019t do is style the grid tracks or cells. Grid doesn\u2019t even go as far as multiple column layout, which has the column-rule properties.\nIn order to set a background colour on a grid cell at the moment you would have to add an empty HTML element or insert some generated content as in the below example. I\u2019m using a 1 pixel grid gap to fake lines between grid cells, and empty div elements, and some generated content to colour those cells.\n\nFaked backgrounds and borders by Rachel Andrew (@rachelandrew) on CodePen.\nI think it would be a nice addition to Grid Layout to be able to directly add backgrounds and borders to cells, tracks and areas. There is an Issue raised in the CSS WG Drafts repository for Decorative Grid Cell pseudo-elements, if you want to add thoughts to that.\nMore control over auto placement\nIf you haven\u2019t explicitly placed the direct children of your grid element they will be laid out according to the grid auto placement rules. You can see in this example how we have created a grid and the items are placing themselves into cells on that grid.\n\nItems auto-place on a defined grid by Rachel Andrew (@rachelandrew) on CodePen.\nThe auto-placement algorithm is very cool. We can position some items, leaving others to auto-place; we can set items to span more than one track; we can use the grid-auto-flow property with a value of dense to backfill gaps in our grid.\n\nWebsafe colors meet CSS Grid (auto-placement demo) by Rachel Andrew (@rachelandrew) on CodePen.\nI think however this could be taken further. In this issue posted to my CSS Grid AMA on GitHub, the question is raised as to whether it would be possible to ask grid to place items on the next available line of a certain name. This would allow you to skip tracks in the grid when using auto-placement, an issue that has also been raised by Emil Bj\u00f6rklund in this post to the www-style list prior to spec discussion moving to Github. 
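\nTo ground that, here is a minimal sketch of the auto-placement control we do have today (the class names are mine, purely for illustration):\n.grid {\n display: grid;\n grid-template-columns: repeat(4, 1fr);\n grid-auto-rows: 100px;\n /* backfill any gaps left by spanning or explicitly placed items */\n grid-auto-flow: dense;\n}\n.grid .feature {\n /* spans two column tracks; remaining items flow around it */\n grid-column: span 2;\n}\nWhat we cannot yet express is an instruction such as \u201cplace this item on the next row line named content-start\u201d, which is the kind of gap in auto-placement these proposals describe.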
I think there are probably similar issues, if you can think of one add a comment here.\nCreating non-rectangular grid areas\nA grid area is a collection of grid cells, defined by setting the start and end lines for columns and rows or by creating the area in the value of the grid-template-areas property as shown below. Those areas however must be rectangular - you can\u2019t create an L-shaped or otherwise non-regular shape.\n\nGrid Areas by Rachel Andrew (@rachelandrew) on CodePen.\n\nPerhaps in the future we could define an L-shape or other non-rectangular area into which content could flow, as in the below currently invalid code where a quote is embedded into an L-shaped content area.\n.wrapper {\n display: grid;\n grid-template-areas:\n \"sidebar header header\"\n \"sidebar content quote\"\n \"sidebar content content\";\n}\nFlowing content through grid cells or areas\nSome uses cases I have seen perhaps are not best solved by grid layout at all, but would involve grid working alongside other CSS specifications. As I detail in this post, there are a class of problems that I believe could be solved with the CSS Regions specification, or a revised version of that spec.\nBeing able to create a grid layout, then flow content through the areas could be very useful. Jen Simmons presented to the CSS Working Group at the Lisbon meeting a suggestion as to how this might work.\nIn a post from earlier this year I looked at a collection of ideas from specifications that include Grid, Regions and Exclusions. These working notes from my own explorations might prompt ideas of your own.\nSolving the keyboard/layout disconnect\nOne issue that grid, and flexbox to a lesser extent, raises is that it is very easy to end up with a layout that is disconnected from the underlying markup. This raises problems for people navigating using the keyboard as when tabbing around the document you find yourself jumping to unexpected places. The problem is explained by L\u00e9onie Watson with reference to flexbox in Flexbox and the keyboard navigation disconnect.\nThe grid layout specification currently warns against creating such a disconnect, however I think it will take careful work by web developers in order to prevent this. It\u2019s also not always as straightforward as it seems. In some cases you want the logical order to follow the source, and others it would make more sense to follow the visual. People are thinking about this issue, as you can read in this mailing list discussion.\nBringing your ideas to the future of Grid Layout\nWhen I\u2019m not getting excited about new CSS features, my day job involves working on a software product - the CMS that is serving this very website, Perch. When we launched Perch there were many use cases that we had never thought of, despite having a good idea of what might be needed in a CMS and thinking through lots of use cases. The additional use cases brought to our attention by our customers and potential customers informed the development of the product from launch. The same will be true for Grid Layout.\nAs a \u201cproduct\u201d grid has been well thought through by many people. Yet however hard we try there will be use cases we just didn\u2019t think of. You may well have one in mind right now. That\u2019s ok, because as with any CSS specification, once Level One of grid is complete, work can begin on Level Two. 
The feature set of Level Two will be informed by the use cases that emerge as people get to grips with what we have now.\nThis is where you get to contribute to the future of layout on the web. When you hit up against the things you cannot do, don\u2019t just mutter about how the CSS Working Group don\u2019t listen to regular developers and code around the problem. Instead, take a few minutes and write up your use case. Post it to your blog, to Medium, create a CodePen and go to the CSS Working Group GitHub specs repository and post an issue there. Write some pseudo-code, draw a picture, just make sure that the use case is described in enough detail that someone can see what problem you want grid to solve. It may be that - as with any software development - your use case can\u2019t be solved in exactly the way you suggest. However once we have a use case, collected with other use cases, methods of addressing that class of problems can be investigated. \nI opened this article by explaining I\u2019d written about grid layout four years ago, and how we\u2019re only now at a point where we will have Grid Layout available in the majority of browsers. Specification development, and implementation into browsers takes time. This is actually a good thing, as it\u2019s impossible to take back CSS once it is out there and being used by production websites. We want CSS in the wild to be well thought through and that takes time. So don\u2019t feel that because you don\u2019t see your use case added to a spec immediately it has been ignored. Do your future self a favour and write down your frustrations or thoughts, and we can all make sure that the web platform serves the use cases we\u2019re dealing with now and in the future.", "year": "2016", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2016-12-12T00:00:00+00:00", "url": "https://24ways.org/2016/what-next-for-css-grid-layout/", "topic": "code"} {"rowid": 305, "title": "CSS Writing Modes", "contents": "Since you may not have a lot of time, I\u2019m going to start at the end, with the dessert.\nYou can use a little-known, yet important and powerful CSS property to make text run vertically. Like this.\n\nOr instead of running text vertically, you can layout a set of icons or interface buttons in this way. Or, of course, with anything on your page. \nThe CSS I\u2019ve applied makes the browser rethink the orientation of the world, and flow the layout of this element at a 90\u00b0 angle to \u201cnormal\u201d. Check out the live demo, highlight the headline, and see how the cursor is now sideways.\nSee the Pen Writing Mode Demo \u2014 Headline by Jen Simmons (@jensimmons) on CodePen.\n\nThe code for accomplishing this is pretty simple. \nh1 { \n writing-mode: vertical-rl;\n}\nThat\u2019s all it takes to switch the writing mode from the web\u2019s default horizontal top-to-bottom mode to a vertical right-to-left mode. If you apply such code to the html element, the entire page is switched, affecting the scroll direction, too. \nIn my example above, I\u2019m telling the browser that only the h1 will be in this vertical-rl mode, while the rest of my page stays in the default of horizontal-tb.\nSo now the dessert course is over. 
Let me serve up this whole meal, and explain the the CSS Writing Mode Specification.\nWhy learn about writing modes?\nThere are three reasons I\u2019m teaching writing modes to everyone\u2014including western audiences\u2014and explaining the whole system, instead of quickly showing you a simple trick.\n\n\nWe live in a big, diverse world, and learning about other languages is fascinating. Many of you lay out pages in languages like Chinese, Japanese and Korean. Or you might be inspired to in the future.\n\n\nUsing writing-mode to turn bits sideways is cool. This CSS can be used in all kinds of creative ways, even if you are working only in English.\n\nMost importantly, I\u2019ve found understanding Writing Modes incredibly helpful when understanding Flexbox and CSS Grid. Before I learned Writing Mode, I felt like there was still a big hole in my knowledge, something I just didn\u2019t get about why Grid and Flexbox work the way they do. Once I wrapped my head around Writing Modes, Grid and Flexbox got a lot easier. Suddenly the Alignment properties, align-* and justify-*, made sense.\n\nWhether you know about it or not, the writing mode is the first building block of every layout we create. You can do what we\u2019ve been doing for 25 years \u2013 and leave your page set to the default left-to-right direction, horizontal top-to-bottom writing mode. Or you can enter a world of new possibilities where content flows in other directions.\nCSS properties\nI\u2019m going to focus on the CSS writing-mode property in this article. It has five possible options:\n writing-mode: horizontal-tb;\n writing-mode: vertical-rl;\n writing-mode: vertical-lr;\n writing-mode: sideways-rl;\n writing-mode: sideways-lr;\nThe CSS Writing Modes Specification is designed to support a wide range of written languages in all our human and linguistic complexity. Which\u2014spoiler alert\u2014is pretty insanely complex. The global evolution of written languages has been anything but simple. \nSo I\u2019ve got to start with explaining some basic concepts of web page layout and writing systems. Then I can show you what these CSS properties do. \nInline Direction, Block Direction, and Character Direction\nIn the world of the web, there\u2019s a concept of \u2018block\u2019 and \u2018inline\u2019 layout. If you\u2019ve ever written display: block or display: inline, you\u2019ve leaned on these concepts. \nIn the default writing mode, blocks stack vertically starting at the top of the page and working their way down. Think of how a bunch of block-levels elements stack\u2014like a bunch of a paragraphs\u2014that\u2019s the block direction. \n\nInline is how each line of text flows. The default on the web is from left to right, in horizontal lines. Imagine this text that you are reading right now, being typed out one character at a time on a typewriter. That\u2019s the inline direction. \n\nThe character direction is which way the characters point. If you type a capital \u201cA\u201d for instance, on which side is the top of the letter? Different languages can point in different directions. Most languages have their characters pointing towards the top of the page, but not all.\n\nPut all three together, and you start to see how they work as a system. 
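\nIf it helps to see that system expressed as code, here is a minimal sketch of the initial values involved. You never need to declare these; they are simply what every page starts with (the character direction comes from the text itself rather than from a property here):\nhtml {\n /* inline direction: each line of text runs left to right */\n direction: ltr;\n /* block direction: blocks stack from the top of the page downwards */\n writing-mode: horizontal-tb;\n}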
\nThe default settings for the web work like this.\nNow that we know what block, inline, and character directions mean, let\u2019s see how they are used in different writing systems from around the world.\nThe four writing systems of CSS Writing Modes\nThe CSS Writing Modes Specification handles all the use cases for four major writing systems; Latin, Arabic, Han and Mongolian. \nLatin-based systems\nOne writing system dominates the world more than any other, reportedly covering about 70% of the world\u2019s population. \n\nThe text is horizontal, running from left to right, or LTR. The block direction runs from top to bottom. \nIt\u2019s called the Latin-based system because it includes all languages that use the Latin alphabet, including English, Spanish, German, French, and many others. But there are many non-Latin-alphabet languages that also use this system, including Greek, Cyrillic (Russian, Ukrainian, Bulgarian, Serbian, etc.), and Brahmic scripts (Devanagari, Thai, Tibetan), and many more.\nYou don\u2019t need to do anything in your CSS to trigger this mode. This is the default. \nBest practices, however, dictate that you declare in your opening element which language and which direction (LTR or RTL) you are using. This website, for instance, uses to let the browser know this content is published in Great Britian\u2019s version of English, in a left to right direction. \nArabic-based systems\nArabic, Hebrew and a few other languages run the inline direction from right to left. This is commonly known as RTL. \nNote that the inline direction still runs horizontally. The block direction runs from top to bottom. And the characters are upright.\n\nIt\u2019s not just the flow of text that runs from right to left, but everything about the layout of the website. The upper right-hand corner is the starting position. Important things are on the right. The eyes travel from right to left. So, typically RTL websites use layouts that are just like LTR websites, only flipped.\nOn websites that support both LTR and RTL, like the United Nations\u2019 site at un.org, the two layouts are mirror images of each other.\nFor many web developers, our experiences with internationalization have focused solely on supporting Arabic and Hebrew script. \nCSS layout hacks for internationalization & RTL\nTo prepare an LTR project to support RTL, developers have had to create all sorts of hacks. For example, the Drupal community started a convention of marking every margin-left and -right, every padding-left and -right, every float: left and float: right with the comment /* LTR */. Then later developers could search for each instance of that exact comment, and create stylesheets to override each left with right, and vice versa. It\u2019s a tedious and error prone way to work. CSS itself needed a better way to let web developers write their layout code once, and easily switch language directions with a single command.\nOur new CSS layout system does exactly that. Flexbox, Grid and Alignment use start and end instead of left and right. This lets us define everything in relationship to the writing system, and switch directions easily. By writing justify-content: flex-start, justify-items: end, and eventually margin-inline-start: 1rem we have code that doesn\u2019t need to be changed. \nThis is a much better way to work. I know it can be confusing to think through start and end as replacements for left and right. 
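\nTo make the mapping concrete, here is a hedged sketch (support for the margin-inline-start line in particular still varies at the time of writing, so treat it as aspirational):\n.media {\n display: flex;\n /* start of the inline direction: left in LTR, right in RTL */\n justify-content: flex-start;\n}\n.media img {\n /* replaces margin-left in LTR and margin-right in RTL */\n margin-inline-start: 1rem;\n}\nWrite the rule once and switching the document direction mirrors the layout without a separate override stylesheet. The names do take some getting used to.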
But it\u2019s better for any multiligual project, and it\u2019s better for the web as a whole.\nSadly, I\u2019ve seen CSS preprocessor tools that claim to \u201cfix\u201d the new CSS layout system by getting rid of start and end and bringing back left and right. They want you to use their tool, write justify-content: left, and feel self-righteous. It seems some folks think the new way of working is broken and should be discarded. It was created, however, to fulfill real needs. And to reflect a global internet. As Bruce Lawson says, WWW stands for the World Wide Web, not the Wealthy Western Web. Please don\u2019t try to convince the industry that there\u2019s something wrong with no longer being biased towards western culture. Instead, spread the word about why this new system is here. \nSpend a bit of time drilling the concept of inline and block into your head, and getting used to start and end. It will be second nature soon enough. \nI\u2019ve also seen CSS preprocessors that let us use this new way of thinking today, even as all the parts aren\u2019t fully supported by browsers yet. Some tools let you write text-align: start instead of text-align: left, and let the preprocessor handle things for you. That is terrific, in my opinion. A great use of the power of a preprocessor to help us switch over now. \nBut let\u2019s get back to RTL. \nHow to declare your direction\nYou don\u2019t want to use CSS to tell the browser to switch from an LTR language to RTL. You want to do this in your HTML. That way the browser has the information it needs to display the document even if the CSS doesn\u2019t load.\nThis is accomplished mainly on the html element. You should also declare your main language. As I mentioned above, the 24 ways website is using to declare the LTR direction and the use of British English. The UN Arabic website uses to declare the site as an Arabic site, using a RTL layout. \nThings get more complicated when you\u2019ve got a page with a mix of languages. But I\u2019m not going to get into all of that, since this article is focused on CSS and layouts, not explaining everything about internationalization. \nLet me just leave direction here by noting that much of the heavy work of laying out the characters which make up each word is handled by Unicode. If you are interested in learning more about LTR, RTL and bidirectional text, watch this video: Introduction to Bidirectional Text, a presentation by Elika Etemad. \nMeanwhile, let\u2019s get back to CSS.\nThe writing mode CSS for Latin-based and Arabic-based systems\nFor both of these systems\u2014Latin-based and Arabic-based, whether LTR or RTL\u2014the same CSS property applies for specifying the writing mode: writing-mode: horizontal-tb. That\u2019s because in both systems, the inline text flow is horizontal, while the block direction is top-to-bottom. This is expressed as horizontal-tb.\nhorizontal-tb is the default writing mode for the web, so you don\u2019t need to specify it unless you are overriding something else higher up in the cascade. You can just imagine that every site you\u2019ve ever built came with:\nhtml {\n writing-mode: horizontal-tb;\n}\nNow let\u2019s turn our attention to the vertical writing systems. \nHan-based systems\nThis is where things start to get interesting. \nHan-based writing systems include CJK languages, Chinese, Japanese, Korean and others. 
There are two options for laying out a page, and sometimes both are used at the same time.\nMuch of CJK text is laid out like Latin-based languages, with a horizontal top-to-bottom block direction, and a left-to-right inline direction. This is the more modern way to doing things, started in the 20th century in many places, and further pushed into domination by the computer and later the web. \nThe CSS to do this bit of the layouts is the same as above:\nsection {\n writing-mode: horizontal-tb;\n}\nOr, you know, do nothing, and get that result as a default. \nAlternatively Han-based languages can be laid out in a vertical writing mode, where the inline direction runs vertically, and the block direction goes from right to left. \nSee both options in this diagram:\n\nNote that the horizontal text flows from left to right, while the vertical text flows from right to left. Wild, eh? \nThis Japanese issue of Vogue magazine is using a mix of writing modes. The cover opens on the left spine, opposite of what an English magazine does. \n\nThis page mixes English and Japanese, and typesets the Japanese text in both horizontal and vertical modes. Under the title \u201cRichard Stark\u201d in red, you can see a passage that\u2019s horizontal-tb and LTR, while the longer passage of text at the bottom of the page is typeset vertical-rl. The red enlarged cap marks the beginning of that passage. The long headline above the vertical text is typeset LTR, horizontal-tb.\n\nThe details of how to set the default of the whole page will depend on your use case. But each element, each headline, each section, each article can be marked to flow the opposite of the default however you\u2019d like.\nFor example, perhaps you leave the default as horizontal-tb, and specify your vertical elements like this:\ndiv.articletext {\n writing-mode: vertical-rl;\n}\nOr alternatively you could change the default for the page to a vertical orientation, and then set specific elements to horizontal-tb, like this:\nhtml { \n writing-mode: vertical-rl;\n}\nh2, .photocaptions, section {\n writing-mode: horizontal-tb;\n}\nIf your page has a sideways scroll, then the writing mode will determine whether the page loads with upper left corner as the starting point, and scroll to the right (horizontal-tb as we are used to), or if the page loads with the upper right corner as the starting point, scrolling to the left to display overflow. Here\u2019s an example of that change in scrolling direction, in a CSS Writing Mode demo by Chen Hui Jing. Check out her demo \u2014 you can switch from horizontal to vertical writing modes with a checkbox and see the difference. \n\nMongolian-based systems\nNow, hopefully so far all of this kind of makes sense. It might be a bit more complicated than expected, but it\u2019s not so hard. Well, enter the Mongolian-based systems.\nMongolian is also a vertical script language. Text runs vertically down the page. Just like Han-based systems. There are two major differences. First, the block direction runs the other way. In Mongolian, block-level elements stack from left to right. \nHere\u2019s a drawing of how Wikipedia would look in Mongolian if it were laid out correctly.\nPerhaps the Mongolian version of Wikipedia will be redone with this layout.\nNow you might think, that doesn\u2019t look so weird. Tilt your head to the left, and it\u2019s very familiar. The block direction starts on the left side of the screen and goes to the right. 
The inline direction starts on the top of the page and moves to the bottom (similar to RTL text, just turned 90\u00b0 counter-clockwise). But here comes the other huge difference. The character direction is \u201cupside down\u201d. The top of the Mongolian characters are not pointing to the left, towards the start edge of the block direction. They point to the right. Like this:\n\nNow you might be tempted to ignore all this. Perhaps you don\u2019t expect to be typesetting Mongolian content anytime soon. But here\u2019s why this is important for everyone \u2014 the way Mongolian works defines the results writing-mode: vertical-lr. And it means we cannot use vertical-lr for typesetting content in other languages in the way we might otherwise expect. \nIf we took what we know about vertical-rl and guessed how vertical-lr works, we might imagine this:\n\nBut that\u2019s wrong. Here\u2019s how they actually compare:\n\nSee the unexpected situation? In both writing-mode: vertical-rl and writing-mode: vertical-lr latin text is rotated clockwise. Neither writing mode let\u2019s us rotate text counter-clockwise. \nIf you are typesetting Mongolian content, apply this CSS in the same way you would apply writing-mode to Han-based writing systems. To the whole page on the html element, or to specific pages of the page like this:\nsection {\n writing-mode: vertical-lr;\n}\nNow, if you are using writing-mode for a graphic design effect on a language that is otherwise typesets horizontally, I don\u2019t think writing-mode: vertical-lr is useful. If the text wraps onto two lines, it stacks in a very unexpected way. So I\u2019ve sort of obliterated it from my toolkit. I find myself using writing-mode: vertical-rl a lot. And never using -lr. Hm.\nWriting modes for graphic design\nSo how do we use writing-mode to turn English headlines sideways? We could rely on transform: rotate()\nHere are two examples, one for each direction. (By the way, each of these demos use CSS Grid for their overall layout, so be sure to test them in a browser that supports CSS Grid, like Firefox Nightly.)\n\nIn this demo 4A, the text is rotated clockwise using this code: \nh1 {\n writing-mode: vertical-rl;\n}\n\nIn this demo 4B, the text is rotated counter-clockwise using this code: \nh1 {\n writing-mode: vertical-rl;\n transform: rotate(180deg);\n text-align: right;\n}\nI use vertical-rl to rotate the text so that it takes up the proper amount of space in the overall flow of the layout. Then I rotate it 180\u00b0 to spin it around to the other direction. And then I use text-align: right to get it to rise up to the top of it\u2019s container. This feels like a hack, but it\u2019s a hack that works.\nNow what I would like to do instead is use another CSS value that was designed for this use case \u2014 one of the two other options for writing mode.\nIf I could, I would lay out example 4A with:\nh1 {\n writing-mode: sideways-rl;\n}\nAnd layout example 4B with: \nh1 {\n writing-mode: sideways-lr;\n}\nThe problem is that these two values are only supported in Firefox. None of the other browsers recognize sideways-*. Which means we can\u2019t really use it yet. \nIn general, the writing-mode property is very well supported across browsers. So I\u2019ll use writing-mode: vertical-rl for now, with the transform: rotate(180deg); hack to fake the other direction. \nThere\u2019s much more to what we can do with the CSS designed to support multiple languages, but I\u2019m going to stop with this intermediate introduction. 
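\nOne practical aside before moving on: if you want the hack today and the proper value where it exists, you could feature-test for it. This is an untested sketch rather than a recommendation, since only Firefox recognises the sideways values at the time of writing:\nh1 {\n /* fallback: the vertical-rl plus rotation hack described above */\n writing-mode: vertical-rl;\n transform: rotate(180deg);\n text-align: right;\n}\n@supports (writing-mode: sideways-lr) {\n h1 {\n writing-mode: sideways-lr;\n transform: none;\n /* the text-align value may need revisiting once this path is testable */\n }\n}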
\nIf you do want a bit more of a taste, look at this example that adds text-orientation: upright; to the mix \u2014 turning the individual letters of the latin font to be upright instead of sideways.\n\nIt\u2019s this demo 4C, with this CSS applied: \nh1 {\n writing-mode: vertical-rl;\n text-orientation: upright;\n text-transform: uppercase;\n letter-spacing: -25px;\n}\nYou can check out all my Writing Modes demos at labs.jensimmons.com/#writing-modes. \n\nI\u2019ll leave you with this last demo. One that applies a vertical writing mode to the sub headlines of a long article. I like how small details like this can really bring a fresh feeling to the content. \nSee the Pen Writing Mode Demo \u2014 Article Subheadlines by Jen Simmons (@jensimmons) on CodePen.", "year": "2016", "author": "Jen Simmons", "author_slug": "jensimmons", "published": "2016-12-23T00:00:00+00:00", "url": "https://24ways.org/2016/css-writing-modes/", "topic": "code"} {"rowid": 304, "title": "Five Lessons From My First 18 Months as a Dev", "contents": "I recently moved from Sydney to London to start a dream job with Twitter as a software engineer. A software engineer! Who would have thought.\nHaving started my career as a journalist, the title \u2018engineer\u2019 is very strange to me. The notion of writing in first person is also very strange. Journalists are taught to be objective, invisible, to keep yourself out of the story. And here I am writing about myself on a public platform. Cringe.\nSince I started learning to code I\u2019ve often felt compelled to write about my experience. I want to share my excitement and struggles with the world! But as a junior I\u2019ve been held back by thoughts like \u2018whatever you have to say won\u2019t be technical enough\u2019, \u2018any time spent writing a blog would be better spent writing code\u2019, \u2018blogging is narcissistic\u2019, etc.\u00a0\nWell, I\u2019ve been told that your thirties are the years where you stop caring so much about what other people think. And I\u2019m almost 30. So here goes!\nThese are five key lessons from my first year and a half in tech:\nDeployments should delight, not dread \n\nLesson #1: Making your deployment process as simple as possible is worth the investment.\n\nIn my first dev job, I dreaded deployments. We would deploy every Sunday night at 8pm. Preparation would begin the Friday before. A nominated deployment manager would spend half a day tagging master, generating scripts, writing documentation and raising JIRAs. The only fun part was choosing a train gif to post in HipChat: \u2018All aboard! The deployment train leaves in 3, 2, 1\u2026\u201d\n\nWhen Sunday night came around, at least one person from every squad would need to be online to conduct smoke tests. Most times, the deployments would succeed. Other times they would fail. Regardless, deployments ate into people\u2019s weekend time\u200a\u2014\u200aand they were intense. Devs would rush to have their code approved before the Friday cutoff. Deployment managers who were new to the process would fear making a mistake.\u00a0\nThe team knew deployments were a problem. They were constantly striving to improve them. And what I\u2019ve learnt from Twitter is that when they do, their lives will be bliss.\nTweetDeck\u2019s deployment process fills me with joy and delight. It\u2019s quick, easy and stress free. In fact, it\u2019s so easy I deployed code on my first day in the job! Anyone can deploy, at any time of day, with a single command. Rollbacks are just as simple. 
There\u2019s no rush to make the deployment train. No manual preparation. No fuss. Value\u200a\u2014\u200awhether in the form of big new features, simple UI improvements or even production bug fixes\u200a\u2014\u200acan be shipped in an instant. The team assures me the process wasn\u2019t always like this. They invested lots of time in making their deployments better. And it\u2019s clearly paid off.\nCode reviews need love, time and acceptance \n\nLesson #2: Code reviews are a three-way gift. Every time I review someone else\u2019s code, I help them, the team and myself.\n\nCode reviews were another pain point in my previous job. And to be honest, I was part of the problem. I would raise code reviews that were far too big. They would take days, sometimes weeks, to get merged. One of my reviews had 96 comments! I would rarely review other people\u2019s code because I felt too junior, like my review didn\u2019t carry any weight.\u00a0\nThe review process itself was also tiring, and was often raised in retrospectives as being slow. In order for code to be merged it needed to have ticks of approval from two developers and a third tick from a peer tester. It was the responsibility of the author to assign the reviewers and tester. It was felt that if it was left to team members to assign themselves to reviews, the \u201csomeone else will do it\u201d mentality would kick in, and nothing would get done.\nAt TweetDeck, no-one is specifically assigned to reviews. Instead, when a review is raised, the entire team is notified. Without fail, someone will jump on it. Reviews are seen as blocking. They\u2019re seen to be equally, if not more important, than your own work. I haven\u2019t seen a review sit for longer than a few hours without comments.\u00a0\nWe also don\u2019t work on branches. We push single commits for review, which are then merged to master. This forces the team to work in small, incremental changes. If a review is too big, or if it\u2019s going to take up more than an hour of someone\u2019s time, it will be sent back.\nWhat I\u2019ve learnt so far at Twitter is that code reviews must be small. They must take priority. And they must be a team effort. Being a new starter is no \u201cget out of jail free card\u201d. In fact, it\u2019s even more of a reason to be reviewing code. Reviews are a great way to learn, get across the product and see different programming styles. If you\u2019re like me, and find code reviews daunting, ask to pair with a senior until you feel more confident. I recently paired with my mentor at Twitter and found it really helpful.\nGet friendly with feature flagging \n\nLesson #3: Feature flagging gives you complete control over how you build and release a project.\n\nSay you\u2019re implementing a new feature. It\u2019s going to take a few weeks to complete. You\u2019ll complete the feature in small, incremental changes. At what point do these changes get merged to master? At what point do they get deployed? Do you start at the back end and finish with the UI, so the user won\u2019t see the changes until they\u2019re ready? With feature flagging\u200a\u2014\u200ait doesn\u2019t matter. In fact, with feature flagging, by the time you are ready to release your feature, it\u2019s already deployed, sitting happily in master with the rest of your codebase.\u00a0\nA feature flag is a boolean value that gets wrapped around the code relating to the thing you\u2019re working on. 
The code will only be executed if the value is true.\nif (TD.decider.get(\u2018new_feature\u2019)) {\n //code for new feature goes here\n}\nIn my first dev job, I deployed a navigation link to the feature I\u2019d been working on, making it visible in the product, even though the feature wasn\u2019t ready. \u201cWhy didn\u2019t you use a feature flag?\u201d a senior dev asked me. An honest response would have been: \u201cBecause they\u2019re confusing to implement and I don\u2019t understand the benefits of using them.\u201d The fix had to wait until the next deployment.\nThe best thing about feature flagging at TweetDeck is that there is no need to deploy to turn on or off a feature. We set the status of the feature via an interface called Deckcider, and the code makes regular API requests to get the status.\u00a0\nAt TweetDeck we are also able to roll our features out progressively. The first rollout might be to a staging environment. Then to employees only. Then to 10 per cent of users, 20 per cent, 30 per cent, and so on. A gradual rollout allows you to monitor for bugs and unexpected behaviour, before releasing the feature to the entire user base.\nSometimes a piece of work requires changes to existing business logic. So the code might look more like this:\nif (TD.decider.get(\u2018change_to_existing_feature\u2019)) {\n //new logic goes here\n} else {\n //old logic goes here\n}\nThis seems messy, right? Riddling your code with if else statements to determine which path of logic should be executed, or which version of the UI should be displayed. But at Twitter, this is embraced. You can always clean up the code once a feature is turned on. This isn\u2019t essential, though. At least not in the early days. When a cheeky bug is discovered, having the flag in place allows the feature to be very quickly turned off again. \nLet data and experimentation drive development \n\nLesson #4: Use data to determine the direction of your product and measure its success.\n\nThe first company I worked for placed a huge amount of emphasis on data-driven decision making. If we had an idea, or if we wanted to make a change, we were encouraged to \u201cbring data\u201d to show why it was necessary. \u201cWithout data, you\u2019re just another person with an opinion,\u201d the chief data scientist would say. This attitude helped to ensure we were building the right things for our customers. Instead of just plucking a new feature out of thin air, it was chosen based on data that reflected its need.\nBut how do you design that feature? How do you know that the design you choose will have the desired impact? That\u2019s where experiments come into play.\u00a0\nAt TweetDeck we make UI changes that we hope will delight our users. But the assumptions we make about our users are often wrong. Our front-end team recently sat in a room and tried to guess which UIs from A/B tests had produced better results. Half the room guessed incorrectly every time.\nWe can\u2019t assume a change we want to make will have the impact we expect. So we run an experiment. Here\u2019s how it works. Users are placed into buckets. One bucket of users will have access to the new feature, the other won\u2019t. We hypothesise that the bucket exposed to the new feature will have better results. The beauty of running an experiment is that we\u2019ll know for sure. 
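\nAs a purely illustrative sketch (this is not TweetDeck\u2019s actual code, and the function, property and bucket names here are invented), deterministic bucketing can be as simple as hashing a stable user id so that the same person always sees the same variant:\nfunction bucketFor(userId, experimentName) {\n // build a stable hash from the user id plus the experiment name\n var hash = 0;\n (userId + ':' + experimentName).split('').forEach(function (character) {\n hash = (hash * 31 + character.charCodeAt(0)) | 0;\n });\n // even split between the two buckets\n return Math.abs(hash) % 2 === 0 ? 'control' : 'experiment';\n}\nif (bucketFor(user.id, 'new_composer') === 'experiment') {\n // render the new version of the UI for this bucket only\n}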
Instead of blindly releasing the feature to all users without knowing its impact, once the experiment has run its course, we\u2019ll have the data to make decisions accordingly.\nHire the developer, not the degree\n\nLesson #5: Testing candidates on real world problems will allow applicants from all backgrounds to shine.\n\nSurely, a company like Twitter would give their applicants insanely difficult code tests, and the toughest technical questions, that only the cleverest CS graduates could pass, I told myself when applying for the job. Lucky for me, this wasn\u2019t the case. The process was insanely difficult\u2014don\u2019t get me wrong\u2014but the team at TweetDeck gave me real world problems to solve.\nThe first code test involved bug fixes, performance and testing. The second involved DOM traversal and manipulation. Instead of being put on the spot in a room with a whiteboard and pen I was given a task, access to the internet, and time to work on it. Similarly, in my technical interviews, I was asked to pair program on real world problems that I was likely to face on the job.\nIn one of my phone screenings I was told Twitter wanted to increase diversity in its teams. Not just gender diversity, but also diversity of experience and background. Six months later, with a bunch of new hires, team lead Tom Ashworth says TweetDeck has the most diverse team it\u2019s ever had. \u201cWe designed an interview process that gave us a way to simulate the actual job,\u201d he said. \u201cIt\u2019s not about testing whether you learnt an algorithm in school.\u201d\nIs this lowering the bar? No. The bar is whether a candidate has the ability to solve problems they are likely to face on the job. I recently spoke to a longstanding Atlassian engineer who said they hadn\u2019t seen an algorithm in their seven years at the company.\nThese days, only about 50 per cent of developers have computer science degrees. The majority of developers are self taught, learn on the job or via online courses. If you want to increase diversity in your engineering team, ensure your interview process isn\u2019t excluding these people.", "year": "2016", "author": "Amy Simmons", "author_slug": "amysimmons", "published": "2016-12-20T00:00:00+00:00", "url": "https://24ways.org/2016/my-first-18-months-as-a-dev/", "topic": "process"} {"rowid": 303, "title": "We Need to Talk About Technical Debt", "contents": "In my work with clients, a lot of time is spent assessing old, legacy, sprawling systems and identifying good code, bad code, and technical debt.\nOne thing that constantly strikes me is the frequency with which bad code and technical debt are conflated, so let me start by saying this:\nNot all technical debt is bad code, and not all bad code is technical debt.\nSometimes your bad code is just that: bad code. 
Calling it technical debt often feels like a more forgiving and friendly way of referring to what may have just been a poor implementation or a substandard piece of work.\nIt is an oft-misunderstood phrase, and when mistaken for meaning \u2018anything legacy or old hacky or nasty or bad\u2019, technical debt is swept under the carpet along with all of the other parts of the codebase we\u2019d rather not talk about, and therein lies the problem.\nWe need to talk about technical debt.\nWhat We Talk About When We Talk About Technical Debt\nThe thing that separates technical debt from the rest of the hacky code in our project is the fact that technical debt, by definition, is something that we knowingly and strategically entered into. Debt doesn\u2019t happen by accident: debt happens when we choose to gain something otherwise-unattainable immediately in return for paying it back (with interest) later on.\nAn Example\nYou\u2019re a front-end developer working on a SaaS product, and your sales team is courting a large customer \u2013 a customer so large that you can\u2019t really afford to lose them. The customer tells you that as long as you can allow them to theme your SaaS application according to their branding, they are willing to sign on the dotted line\u2026 the problem being that your CSS architecture was never designed to incorporate theming at all, and there isn\u2019t currently a nice, clean way to incorporate a theme into the codebase.\nYou and the business make the decision that you will hack a theme into the product in two days. It\u2019s going to be messy, it\u2019s going to be ugly, but you can\u2019t afford to lose a huge customer just because your CSS isn\u2019t quite right, right now. This is technical debt.\nYou deliver the theme, the customer signs up, and everyone is happy. Except you (and the business, because you are one and the same) have a decision to make:\n\nDo we go back and build theming into the CSS architecture as a first-class citizen, porting the hacked theme back into a codified and formal framework?\nDo we carry on as we are? Things are working okay, and the customer paid up, so is there any reason to invest time and effort into things after we (and the customer) got what we wanted?\n\nOption 1 is choosing to pay off your debts; Option 2 is ignoring your repayments.\nWith Option 1, you\u2019re acknowledging that you did what you could given the constraints, but, free of constraints, you\u2019d have done something different. Now, you are choosing to implement that something different.\nWith Option 2, however, you are avoiding your responsibility to repay your debt, and you are letting interest accrue. The problem here is that\u2026\n\nyour SaaS product now offers theming to one of your customers;\nanother potential customer might also demand the ability to theme their instance of your product;\nyou can\u2019t refuse them that request, nor can you quickly fulfil it;\nyou hack in another theme, thus adding to the balance of your existing debt;\nand so on (plus interest) for every subsequent theme you need to implement.\n\nHere you have increased entropy whilst making little to no attempt to address what you already knew to be problems.\nYour second, third, fourth, fifth request for theming will be hacked on top of your hack, further accumulating debt whilst offering nothing by way of a repayment. 
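\nTo make that concrete, here is a sketch using invented class names rather than anything from a real product, with custom properties as one possible shape for the repayment:\n/* the two-day hack, repeated for every new customer */\n.theme-acme .header { background: #e61c3c; }\n.theme-acme .btn { background: #e61c3c; }\n.theme-bigcorp .header { background: #005eb8; }\n.theme-bigcorp .btn { background: #005eb8; }\n\n/* the repayment: theming codified once, each customer only supplies values */\n.header,\n.btn {\n background: var(--brand-colour, #333);\n}\n.theme-acme { --brand-colour: #e61c3c; }\n.theme-bigcorp { --brand-colour: #005eb8; }\nSkip that second half, though, and the overrides keep piling up.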
After a long enough period, the code involved will get so unwieldy, so hard to work with, that you are forced to tear it all down and start again, and the most painful part of this is that you\u2019re actually paying off even more than your debt repayments would have been in the first place. Two days of hacking plus, say, five days of subsequent refactoring, would still have been substantially less than the weeks you will now have to spend rewriting your CSS to fix and incorporate the themes properly. You\u2019ve made a loss; your strategic debt ultimately became a loss-making exercise.\nThe important thing to note here is that you didn\u2019t necessarily write bad code. You knew there were two options: the quick way and the correct way. The decision to take the quick route was a definite choice, because you knew there was a better way. Implementing the better way is your repayment.\nGood Debt and Bad Debt\nTechnical debt is acceptable as long as you have intentions to settle; it can be a valuable solution to a business problem, provided the right approach is taken afterwards. That doesn\u2019t, however, mean that all debt is born equal. Just as in real life, there is good debt and there is bad debt.\nGood debt might be\u2026\n\na mortgage;\na student loan, or;\na business loan.\n\nThese are types of debt that will secure you the means of repaying them. These are well considered debts whose very reason for being will allow you to make the money to pay them off\u2014they have real, tangible benefit.\nA business loan to secure some equipment and premises will allow you to start an enterprise whose revenue will allow you to pay that debt back; a student loan will allow you to secure the kind of job that has the ability to pay a student loan back.\nThese kinds of debt involve a considered and well-balanced decision to acquire something in the short term in the knowledge that you will have the means, in the long term, to pay it back.\nConversely, bad debt might be\u2026\n\nborrowing $1,000 from a loan shark so you can go to Vegas, or;\ntaking out a payday loan in order to buy a new television.\n\nBoth of these kinds of debt will leave you paying for things that didn\u2019t provide you a way of earning your own capital. That is to say, the loans taken did not secure anything that would help pay off said loans. These are bad debts that will usually provide a net loss. You really are only gaining the short term in exchange for a long term financial responsibility: i.e., was it worth it?\nA good litmus test for debt is to compare the gains of its immediate benefit with the cost of its long term commitment.\nThe earlier example of theming a site is a good debt, provided we are keeping up our repayments (all debt is bad debt if you don\u2019t). A calculated decision to do something \u2018wrong\u2019 in the short term with the promise of better payoffs later on.\nBad Technical Debt\nThe majority of my work is with front-end development teams\u2014CSS is what I do. To that end, the most succinct example of technical debt for that audience is simply:\n!important\nAll front-end developers know the horrors and dangers associated with using !important, yet we continue to use it. Why?\nIt\u2019s not necessarily because we\u2019re bad developers, but because we see a shortcut. !important is usually implemented as a quick way out of a sticky specificity situation. 
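\nA familiar shape for that situation, with selectors invented purely for illustration, and the tempting patch alongside it:\n/* deep in a legacy stylesheet, winning every specificity battle */\n#sidebar .widget a.btn {\n background: #eee;\n}\n/* the new component style that keeps losing to it */\n.btn--danger {\n background: #e61c3c !important; /* the quick way out */\n}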
We could spend the rest of the day refactoring our CSS to fix the issue at its source, or we can spend mere seconds typing the word !important and patch over the symptoms.\nThis is us making an explicit decision to do something less than ideal now in exchange for immediate benefit. After all, refactoring our CSS will take a lot more time, and will still only leave us with the same outcome that the vastly quicker !important solution will, so it seems to make better business sense.\nHowever, this is a bad debt. !important takes seconds to implement but weeks to refactor. The cost of refactoring this back out later will be an order of magnitude higher than it would be to have done things properly the first time. The first !important usually sets a precedent, and subsequent developers are likely to have to use it themselves in order to get around the one that you left.\nSo many CSS projects deteriorate because of this one simple word, and rewrites become more and more imminent. That makes it possibly the most costly 10 bytes a CSS developer could ever write.\nBad Code\nNow we\u2019ve got a good idea of what constitutes technical debt, let\u2019s take a look at what constitutes bad code. Something I hear time and time again in my client work goes a little like this:\n\nWe\u2019ve amassed a lot of technical debt and we\u2019d like to get a strategy in place\nto begin dealing with it.\n\nWhilst I genuinely admire their willingness to identify and desire to fix problems in their code, sometimes they\u2019re not looking at technical debt at\nall\u2014sometimes they\u2019re just looking at bad code, plain and simple.\nWhere technical debt is knowing that there\u2019s a better way, but the quicker way makes more sense right now, bad code is not caring if there\u2019s a better way at all.\nAgain, looking at a CSS-specific world, a lot of bad code is contributed by non-front-end developers with little training, appreciation, or even respect for the front-end landscape. Writing code with reckless abandon should not be described as technical debt, because to do so would imply that\u2026\n\nthe developers knew they were implementing a sub-par solution, but\u2026\nthe developers also knew that a better solution was out there, which\u2026\nimplies that it can be tidied up relatively simply.\n\nDevelopers writing bad code is a larger and more cultural problem that requires a lot more effort to fix. Hopefully\u2014and usually\u2014bad code is in the minority, but it helps to be objective in identifying and solving it. Bad code usually doesn\u2019t happen for a good enough reason, and is therefore much harder to justify.\nTechnical debt often represents ability in judgement, whereas bad code often represents a gap in skills.\nTakeaway\nTake time to familiarise yourself with the true concepts underlying technical debt and why it exists. Understand that technical debt can be good or bad. Admit that sometimes code is just of poor quality.\nUnderstanding these points will allow you to make better calls around what you might need to refactor and when, and what skills gaps you might have in your team.\n\nSometimes it\u2019s okay to cut corners if there is a tangible gain to be had in the immediate term.\nTechnical debt is okay provided it is a sensible debt and you have intentions to pay it off.\nTechnical debt is not necessarily synonymous with bad code, and bad code isn\u2019t necessarily technical debt. 
Technical debt is code that was implemented given limited knowledge or resource, with the understanding that you would need to repay something in future.\nTechnical debt is not inherently bad\u2014failure to make repayments is. Periodically, it is justifiable\u2014encouraged, even\u2014to enter a debt in order to fulfil a more pressing matter. However, it is imperative that we begin making repayments as soon as we are capable, be that based on newly available time or knowledge.\nBad code is worse than technical debt as it represents a lack of knowledge or quality control within a team. It needs a much more fundamental fix.", "year": "2016", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2016-12-05T00:00:00+00:00", "url": "https://24ways.org/2016/we-need-to-talk-about-technical-debt/", "topic": "code"} {"rowid": 302, "title": "Flexible Project Management in Inflexible Environments", "contents": "Handling unforeseen circumstances is an inevitable part of any project. It\u2019s also often the most uncomfortable, and there is no amount of skill or planning that will fully eradicate the need to adapt to change. The ability to be flexible, responsive, and unafraid of facing not only problems, but also potentially positive scope changes and new ideas, isn\u2019t an easy one to master. I am by no means saying that I have, but what I have learned is that there is often the temptation to shut out anything that might derail your plan, even sometimes at the cost of the quality you\u2019re committed to.\nThe reality is that as someone leading a project you know there will be challenges, but, in general, it\u2019s a hassle to try keep the landscape open. Problems are bridges we should cross when we come to them, but intentional changes to the plan, and adapting for the sake of improving your first idea, is harder. There are tight schedules, resource is planned miles ahead, and you\u2019re already juggling twenty other things. If you\u2019re passionate about the quality of work you deliver and are working somewhere that considers itself expert within the field of digital, then having an attitude of flexibility is extremely important. It\u2019s important when you\u2019re overcoming a challenge or problem, but it\u2019s also important for allowing ideas to evolve and be refined as much as they can be throughout the course of a project.\nWhere theory falls short\nThe premise of any Agile methodology, Scrum for example, is based around being able to work efficiently, react quickly and deliver relevant chunks of a product in manageable increments. It\u2019s often hailed as king of flexible management and it can work really well, especially for in-house software products developed over a long or even an indefinite period of time. It holds off defining scope too far ahead and lets teams focus on smaller amounts of work, and allows them to regularly reprioritise. Unfortunately though, not all environments lend themselves as easily to a fully Agile setup. Even the ones that do may be restrained from putting it fully into practice for an array of other internal reasons.\nDelivering digital services to clients\u2014within an agency setting or as a freelancer\u2014often demands a more rigid structure. You need clear sign-off points, there\u2019s a lot less flexibility in defining features, or working within budgets and timeframes. 
To start with, for a project to warrant a fully Agile team working on it, and especially for agencies, you need clients big enough and rich enough to justify the resource. You also need a lot of client trust to propose defining features and scope as you go. Although this is achievable\u2014and there are agencies that operate an agile setup\u2014it takes a long journey to reach that scale in the full sense of the word. Building a reputation that commands unconditional trust and reaching the point where your projects are consistently of a certain size often requires backing by long journey of success and excellence.\nSo there is a lot of room left for understanding how we can best strive to still deliver excellent projects within more constrained structures. We know that rigid waterfall planning, more often than not, falls over as soon as a project gets anything past a basic brochure site. There are many critiques of the system, but one of the main ones tends to be that nobody considers each other\u2019s work properly, which can result in very expensive and inefficient development.\nEqually, for reasons we\u2019ve already touched upon, running fully agile teams often isn\u2019t the right answer. So many companies, individuals, and organisations look for a middle-ground that balances being flexible and adaptive, but also provides enough upfront commitment to agree budgets, get client/stakeholder sign off, and effectively coordinate internal resource across multiple parallel projects.\nAlthough I don\u2019t have a perfect formula\u2014and can very much assert there is no one perfect way of managing a project because every project is different to the next\u2014I\u2019ve identified a few different ways you can approach flexibility that have really helped me in running projects more smoothly within more realistic constraints.\nPlanned Flexibility\nDrawing on some of the traditional methodologies such as PRINCE2, a good starting point for aspiring to be flexible is by planning for it from the start.\nPlanning flexibility comes in a few forms. For one, you can regularly identify and log potential risks as a generally good, on-going habit over the course of the project. This essentially just involves scanning the horizon for potential blips on a regular basis (for example weekly) by consulting with your team and documenting it somewhere. It means you have a checkpoint when you sit down and make sure you\u2019re minimising what will or may catch you by surprise. A good time to do this is in a weekly catch up meeting. It\u2019s not going to fix all your problems, but it will make sure you have a head start on the ones you can see coming.\nOn the subject of team meetings, setting up recurring project events, including a weekly call, a weekly team meeting and (depending on the size of the project) I like to try also do a stand-up as often as possible. Keeping everyone involved and bought in to a project is going to help you infinitely when you need to spot a problem or manage changes to the plan. It will be the difference between your designer spotting an issue and making a mental note to \u2018tell you later\u2019, and them actually coming over to tell you directly and immediately. Despite the overhead of meetings, and looping people into stages that they aren\u2019t directly responsible for, the business benefits are chances for success are drastically increased. 
Planning in, and being aware of how important your team is, will help you be flexible.\nBuilding contingency (formally known as slack) into your project plan from the word go is another well-known and essential way of planning to be flexible. Your project plan will change a lot over the course of a project, but there are still the days that you estimate a job will take, and the days you should actually plan in. Most sensible management teams understand that budgets need to be agreed with this slack in mind or you will not be able to deliver a quality service. I believe that commercial awareness is one of the most valuable skills a project manager can have, but penny pinching will ruin client and team relationships, destroy buy-in and creativity, and often end you up with a much more expensive, hacky, and resented product.\nIt\u2019s not a justification to let budgets spiral out of control, but a way of thinking about the bigger picture and wider plan of the company itself. It\u2019s unlikely you want high staff turnover because everyone fell out while you were screaming money at them and they didn\u2019t feel like they could do a good job. It\u2019s also unlikely that you will be able to deliver quality products, which will win you a strong reputation and subsequently bigger and better projects. Evaluating risk factors and building in the right amount of slack from the start will give you more wriggle room when you need to adapt and react. On the flip side, also keeping an overview of the wider workload (that you\u2019re not necessarily responsible for), and knowing who to talk to if resource is becoming free or needs filling, is another handy way of being able to react quickly and ensuring your management system is respected. You want pockets of backup time planned in, but you also want everyone being as productive as they can most of the time. Never run at 100% capacity: as soon as something does need to change, you\u2019re left with nowhere to move.\nTransparency\nHaving a client or stakeholder that trusts you is a really powerful aid in any regard, but especially so when you need to communicate an issue or new suggestion. Positioning yourself and your team as experts and taking the time to delve into the wider picture\u2014and the goals surrounding your client\u2019s reasons to commission the project in the first place\u2014will make you more valuable to them. Clients and stakeholders will always be different, and sometimes you will get people who are just plain difficult, but more often than not people will listen if you\u2019re willing to talk and explain things.\nAs I\u2019m sure all of us have realised at one point or another, a lot of people think they know what they want, and it\u2019s usually the wrong thing. Managing key stakeholders in your project is arguably your biggest challenge; if they are on your side and feel like the team is genuinely working to give them something of quality and value, then they will make your job easier. It\u2019s often down to you to educate them, and to help them recognise and understand the work involved and you and your team\u2019s reasoning behind your decisions.\nBeing overly submissive or overly secretive will foster a dynamic in which they feel expected to steer the project. In this situation they may not respect the team\u2019s suggestions or may come up with some unreasonable and counterproductive ideas that are likely to hinder progress and lower morale. 
Getting the stakeholder on board and making them feel a part of the wider picture will make things easier. Pushing back and challenging ideas or working hard to justify something they don\u2019t quite understand will often work in your favour and protects your team. On quite a basic level it also shows you care and are invested; on another, it shows you feel confident in your expertise within your field and that is ultimately the reason they hired you.\nTaking the time to think about and be aware of this relationship will make it easier to be flexible and handle new ideas or suggestions that pop up as the project goes along. Change doesn\u2019t need to be \u2018scope creep\u2019 if it\u2019s raised in a practical, value-orientated, and level headed discussion. There is usually a way forward for new ideas, as long as they\u2019re valuable and support the wider goals. Maybe the deadline gets pushed back, maybe you get more budget, maybe the client is happy to forgo something else. As long as there\u2019s value and reason, it shows integrity to the project and respect for its success. You can\u2019t expect this to go smoothly without having invested in the client relationship, so it\u2019s a large point in paving the way to handling change well.\nReactive Flexibility\nFinally, if you\u2019ve been doing this for a while, you\u2019ll know by now that you can\u2019t anticipate everything. Sometimes you will have to react and change the plan under circumstances that aren\u2019t easy. When an unexpected problem first rears its head\u2014a client\u2019s casual afterthought that\u2019s threatening the scope of the project, an internal resource conflict, a junior member of staff that\u2019s not grasping the ropes quite as quickly as you\u2019d hoped\u2014you have to react quickly.\nIn his book, \u2018Pitch Anything\u2019, Oren Klaff talks about people\u2019s first reactions being processed by their \u2018crocodile brain\u2019 before they\u2019ve had a chance to refine and digest the information more intelligibly. As project managers, product owners, or scrum masters, it\u2019s natural for our immediate reactions to an unexpected problem to cause a pang of stress. But after that initial jolt you need to turn to practical solutions and start racking your brain for different ways forward. It\u2019s here you need to remember to not let your imagination get the better of you, especially if you\u2019ve been putting in the legwork with your team and your client. There is always a way forward and moments like this can be a good opportunity to develop your negotiation and diplomacy skills. Don\u2019t let your immediate reaction be shutting the problem down; instead, take a second to think about it before you decide on the best direction. In a stressful situation, your first idea probably won\u2019t be your best one.\nFrom an internal point of view, it\u2019s very important that whatever went wrong doesn\u2019t turn into a finger pointing exercise and you don\u2019t lose your cool. Getting caught up in a blame game or a witch hunt is never productive. Relationship cultivating can sometimes be the pillar that gets you through a stressful blip. The biggest tip for staying flexible when you\u2019re reacting to a problem\u2014apart from obviously thinking of ways forward\u2014is to communicate. Don\u2019t go quiet until you feel like you have a plan; you\u2019ll often need to put everyone else at ease before you can move things forward. 
Problem solving is part of the job and will need to happen in even the most flexible of product delivery systems.\nIn conclusion, being flexible is never simple but there are things you can do to make your life easier. Owning a position of expertise, putting together a team that\u2019s involved in each other\u2019s work and cultivating a client/stakeholder relationship that\u2019s as transparent and respectful as possible will get you a long way. In times of crisis, believe in your skills and be open to adapting over getting frustrated.", "year": "2016", "author": "Gillian Sibthorpe", "author_slug": "gilliansibthorpe", "published": "2016-12-04T00:00:00+00:00", "url": "https://24ways.org/2016/flexible-project-management/", "topic": "process"} {"rowid": 301, "title": "Stretching Time", "contents": "Time is valuable. It\u2019s a precious commodity that, if we\u2019re not too careful, can slip effortlessly through our fingers. When we think about the resources at our disposal we\u2019re often guilty of forgetting the most valuable resource we have to hand: time.\nWe are all given an allocation of time from the time bank. 86,400 seconds a day to be precise, not a second more, not a second less.\nIt doesn\u2019t matter if we\u2019re rich or we\u2019re poor, no one can buy more time (and no one can save it). We are all, in this regard, equals. We all have the same opportunity to spend our time and use it to maximum effect. As such, we need to use our time wisely.\nI believe we can \u2018stretch\u2019 time, ensuring we make the most of every second and maximising the opportunities that time affords us.\nThrough a combination of \u2018Structured Procrastination\u2019 and \u2018Focused Finishing\u2019 we can open our eyes to all of the opportunities in the world around us, whilst ensuring that we deliver our best work precisely when it\u2019s required. A win win, I\u2019m sure you\u2019ll agree.\nStructured Procrastination\nI\u2019m a terrible procrastinator. I used to think that was a curse \u2013 \u201cWhy didn\u2019t I just get started earlier?\u201d \u2013 over time, however, I\u2019ve started to see procrastination as a valuable tool if it is used in a structured manner.\nDon Norman refers to procrastination as \u2018late binding\u2019 (a term I\u2019ve happily hijacked). As he argues, in Why Procrastination Is Good, late binding (delay, or procrastination) offers many benefits:\n\nDelaying decisions until the time for action is beneficial\u2026 it provides the maximum amount of time to think, plan, and determine alternatives.\n\nWe live in a world that is constantly changing and evolving, as such the best time to execute is often \u2018just in time\u2019. By delaying decisions until the last possible moment we can arrive at solutions that address the current reality more effectively, resulting in better outcomes.\nProcrastination isn\u2019t just useful from a project management perspective, however. It can also be useful for allowing your mind the space to wander, make new discoveries and find creative connections. 
By embracing structured procrastination we can \u2018prime the brain\u2019.\nAs James Webb Young argues, in A Technique for Producing Ideas, all ideas are made of other ideas and the more we fill our minds with other stimuli, the greater the number of creative opportunities we can uncover and bring to life.\nBy late binding, and availing of a lack of time pressure, you allow the mind space to breathe, enabling you to uncover elements that are important to the problem you\u2019re working on and, perhaps, discover other elements that will serve you well in future tasks.\nWhen setting forth upon the process of writing this article I consciously set aside time to explore. I allowed myself the opportunity to read, taking in new material, safe in the knowledge that what I discovered \u2013 if not useful for this article \u2013 would serve me well in the future. \nRon Burgundy summarises this neatly:\n\nProcrastinator? No. I just wait until the last second to do my work because I will be older, therefore wiser.\n\nAn \u2018older, therefore wiser\u2019 mind is a good thing. We\u2019re incredibly fortunate to live in a world where we have a wealth of information at our fingertips. Don\u2019t waste the opportunity to learn, rather embrace that opportunity. Make the most of every second to fill your mind with new material, the rewards will be ample.\nDeadlines are deadlines, however, and deadlines offer us the opportunity to focus our minds, bringing together the pieces of the puzzle we found during our structured procrastination.\nLike everyone I\u2019ll hear a tiny, but insistent voice in my head that starts to rise when the deadline is approaching. The older you get, the closer to the deadline that voice starts to chirp up.\nAt this point we need to focus.\nFocused Finishing\nWe live in an age of constant distraction. Smartphones are both a blessing and a curse, they keep us connected, but if we\u2019re not careful the constant connection they provide can interrupt our flow.\nWhen a deadline is accelerating towards us it\u2019s important to set aside the distractions and carve out a space where we can work in a clear and focused manner.\nWhen it\u2019s time to finish, it\u2019s important to avoid context switching and focus. All those micro-interactions throughout the day \u2013 triaging your emails, checking social media and browsing the web \u2013 can get in the way of you hitting your deadline. At this point, they\u2019re distractions.\nChunking tasks and managing when they\u2019re scheduled can improve your productivity by a surprising order of magnitude. At this point it\u2019s important to remove distractions which result in \u2018attention residue\u2019, where your mind is unable to focus on the current task, due to the mental residue of other, unrelated tasks.\nBy focusing on a single task in a focused manner, it\u2019s possible to minimise the negative impact of attention residue, allowing you to maximise your performance on the task at hand.\nCal Newport explores this in his excellent book, Deep Work, which I would highly recommend reading. As he puts it:\n\nEfforts to deepen your focus will struggle if you don\u2019t simultaneously wean your mind from a dependence on distraction.\n\nTo help you focus on finishing it\u2019s helpful to set up a work-focused environment that is purposefully free from distractions. 
There\u2019s a time and a place for structured procrastination, but \u2013 equally \u2013 there\u2019s a time and a place for focused finishing.\nThe French term \u2018mise en place\u2019 is drawn from the world of fine cuisine \u2013 I discovered it when I was procrastinating \u2013 and it\u2019s applicable in this context. The term translates as \u2018putting in place\u2019 or \u2018everything in its place\u2019 and it refers to the process of getting the workplace ready before cooking.\nJust like a professional chef organises their utensils and arranges their ingredients, so too can you.\nThanks to the magic of multiple users on computers, it\u2019s possible to create a separate user on your computer \u2013 without access to email and other social tools \u2013 so that you can switch to that account when you need to focus and hit the deadline.\nAnother, less technical way of achieving the same result \u2013 depending, of course, upon your line of work \u2013 is to close your computer and find some non-digital, unconnected space to work in.\nThe goal is to carve out time to focus so you can finish. As Newport states:\n\nIf you don\u2019t produce, you won\u2019t thrive \u2013 no matter how skilled or talented you are.\n\nProcrastination is fine, but only if it\u2019s accompanied by finishing. Create the space to finish and you\u2019ll enjoy the best of both worlds.\nIn closing\u2026\nThere is a time and a place for everything: there is a time to procrastinate, and a time to focus. To truly reap the rewards of time, the mind needs both.\nBy combining the processes of \u2018Structured Procrastination\u2019 and \u2018Focused Finishing\u2019 we can make the most of our 86,400 seconds a day, ensuring we are constantly primed to make new discoveries, but just as importantly, ensuring we hit the all-important deadlines.\nMake the most of your time, you only get so much. Use every second productively and you\u2019ll be thankful that you did. Don\u2019t waste your time, once it\u2019s gone, it\u2019s gone\u2026 and you can never get it back.", "year": "2016", "author": "Christopher Murphy", "author_slug": "christophermurphy", "published": "2016-12-21T00:00:00+00:00", "url": "https://24ways.org/2016/stretching-time/", "topic": "process"} {"rowid": 300, "title": "Taking Device Orientation for a Spin", "contents": "When The Police sang \u201cDon\u2019t Stand So Close To Me\u201d they weren\u2019t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we\u2019re there, we\u2019re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot.\nBut if you can\u2019t beat them, join them.\nA brave new world\nA couple of years back all sorts of specs for new HTML5 APIs sprang up much to our collective glee. Emboldened, we ran a few tests and found they basically didn\u2019t work in anything and went off disheartened into the corner for a bit of a sob.\nTurns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. 
Mostly literally half usable\u2014we\u2019re still talking about browsers, after all.\nNow, of course they\u2019re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from Github that will fall out of fashion while we\u2019re still trying to locate the documentation, right? Well, no! \nSo what if we actually wanted to use one of these APIs, say to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed), how could we do something like that? Let\u2019s find out.\nThe Device Orientation API\nThe phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn\u2019t work, but also gives us three values that represent orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of it as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat.\nThe main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too.\nThe API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away:\nwindow.addEventListener('deviceorientation', function(e) {\n console.log(e.alpha);\n console.log(e.beta);\n console.log(e.gamma);\n});\nIf you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended.\nOne important note\nLike many of these newfangled APIs, Device Orientation is only available over HTTPS. We\u2019re not allowed to have too much fun without protection, so make sure that you\u2019re working on a secure line. I\u2019ve found a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel.\nngrok http -host-header=rewrite mylocaldevsite.dev:80\nngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option.\nRight, where were we?\nMake something to look at\nIt\u2019s all well and good having a bunch of numbers, but they\u2019re no use unless we do something with them. Something creative. Something to inspire the generations. Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we\u2019re not trying to be too clever here). Yeah, let\u2019s just build one of those.\nOur basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, and CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the \u2018window\u2019 so that the part we want to show is visible.\nHere it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.)\nThe details of the slider aren\u2019t important (we\u2019re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. 
This takes a percentage value (0-100) with 0 being far left and 100 being far right (or \u2018alt-nazi\u2019 or whatever).\nvar position_image = function(percent) {\n var pos = (img_W / 100)*percent;\n img.style.transform = 'translate(-'+pos+'px)'; \n};\nAll this does is figure out what that percentage means in terms of the image width, and set the transform: translate(\u2026); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.)\nOk. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we\u2019re done. (We\u2019re so not done.)\nThe maths bit\nIf we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you\u2019ll note that the main value that is changing as we swing back and forth is the \u2018alpha\u2019 value. That\u2019s the one we want to track.\nAs our goal here is hey, these APIs are interesting and fun and not let\u2019s build the world\u2019s best panoramic image viewer, we\u2019ll start by making a few assumptions and simplifications:\n\nWhen the image loads, we\u2019ll centre the image and take the current nose-forward orientation reading as the middle.\nMoving left, we\u2019ll track to the left of the image (lower percentage).\nMoving right, we\u2019ll track to the right (higher percentage).\nIf the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they\u2019re on their own.\n\nNose-forward\nWhen the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn\u2019t really matter to us, as we don\u2019t know which direction the user might be facing in anyway \u2014 we just need to record that initial state and then use it to compare any new readings.\nvar initial_position = null;\n\nwindow.addEventListener('deviceorientation', function(e) {\n if (initial_position === null) {\n initial_position = Math.floor(e.alpha);\n };\n\n var current_position = initial_position - Math.floor(e.alpha);\n});\n(I\u2019m rounding down the values with Math.floor() to make debugging easier - we\u2019ll take out the rounding later.)\nWe get our initial position if it\u2019s not yet been set, and then calculate the current position as a difference between the new value and the stored one.\nThese values are weird\nOne thing you need to know about these values, is that they range from 0 to 360 but then you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero as you\u2019d expect if you do a forward roll.\nWhat I\u2019m interested in is working out my rotation. If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can\u2019t see it.\nvar rotation = current_position;\nif (current_position > 180) rotation = current_position-360;\nWhich way up?\nSince we\u2019re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait or landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. 
That\u2019s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn\u2019t well supported. The best I can come up with is:\nvar screen_portrait = false;\nif (window.innerWidth < window.innerHeight) {\n screen_portrait = true;\n}\nIt works. Then we can use screen_portrait to branch our code:\nif (screen_portrait) {\n if (current_position > 180) rotation = current_position-360;\n} else {\n if (current_position < -180) rotation = 360+current_position;\n}\nHere\u2019s the code in action so you can see the values for yourself. If you change screen orientation you\u2019ll need to refresh the page (it\u2019s a demo!).\nLimiting rotation\nNow, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360\u00b0 to view a photo. Instead, we need to limit the range of movement to something like 60\u00b0-from-nose in either direction and normalise our values to pan the entire image across that 120\u00b0 range. -60 would be full-left (0%) and 60 would be full-right (100%).\nIf we set max_rotation = 60, that code ends up looking like this:\nif (rotation > max_rotation) rotation = max_rotation;\nif (rotation < (0-max_rotation)) rotation = 0-max_rotation;\n\nvar percent = Math.floor(((rotation + max_rotation)/(max_rotation*2))*100);\nWe should now be able to get a rotation from -60\u00b0 to +60\u00b0 expressed as a percentage. Try it for yourself.\nThe big reveal\nAll that\u2019s left to do is pass that percentage to our image positioning function and would you believe it, it might actually work.\nposition_image(percent);\nYou can see the final result and take it for a spin. Literally.\nSo what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it. \nWhat we have made is progress. We\u2019ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn\u2019t have this morning. Something we probably didn\u2019t even want this morning. Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting. But more importantly, we\u2019ve learned that our browsers are just a little bit more capable than we thought.\nThe web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy thing that are often dismissed as the preserve of native apps. Like some sort of app marmalade. Poppycock. \nThe web is an amazing, exciting place to create things. All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let\u2019s create things.", "year": "2016", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2016-12-24T00:00:00+00:00", "url": "https://24ways.org/2016/taking-device-orientation-for-a-spin/", "topic": "code"} {"rowid": 299, "title": "What the Heck Is Inclusive Design?", "contents": "Naming things is hard. And I don\u2019t just mean CSS class names and JSON properties. 
Finding the right term for what we do with the time we spend awake and out of bed turns out to be really hard too.\nI\u2019ve variously gone by \u201cfront-end developer\u201d, \u201cuser experience designer\u201d, and \u201caccessibility engineer\u201d, all clumsy and incomplete terms for labeling what I do as an\u2026 erm\u2026 see, there\u2019s the problem again.\nIt\u2019s tempting to give up entirely on trying to find the right words for things, but this risks summarily dispensing with thousands of years spent trying to qualify the world around us. So here we are again.\nRecently, I\u2019ve been using the term \u201cinclusive design\u201d and calling myself an \u201cinclusive designer\u201d a lot. I\u2019m not sure where I first heard it or who came up with it, but the terminology feels like a good fit for the kind of stuff I care to do when I\u2019m not at a pub or asleep.\nThis article is about what I think \u201cinclusive design\u201d means and why I think you might like it as an idea.\nIsn\u2019t \u2018inclusive design\u2019 just \u2018accessibility\u2019 by another name?\nNo, I don\u2019t think so. But that\u2019s not to say the two concepts aren\u2019t related. Note the \u2018design\u2019 part in \u2018inclusive design\u2019 \u2014 that\u2019s not just there by accident. Inclusive design describes a design activity; a way of designing things.\nThis sets it apart from accessibility \u2014 or at least our expectations of what \u2018accessibility\u2019 entails. Despite every single accessibility expert I know (and I know a lot) recommending that accessibility should be integrated into design process, it is rarely ever done. Instead, it is relegated to an afterthought, limiting its effect.\nThe term \u2018accessibility\u2019 therefore lacks the power to connote design process. It\u2019s not that we haven\u2019t tried to salvage the term, but it\u2019s beginning to look like a lost cause. So maybe let\u2019s use a new term, because new things take new names. People get that.\nThe \u2018access\u2019 part of accessibility is also problematic. Before we get ahead of ourselves, I don\u2019t mean access is a problem \u2014 access is good, and the more accessible something is the better. I mean it\u2019s not enough by itself.\nImagine a website filled with poorly written and lackadaisically organized information, including a bunch of convoluted and confusing functionality. To make this site accessible is to ensure no barriers prevent people from accessing the content. \nBut that doesn\u2019t make the content any better. It just means more people get to suffer it. \nWhoopdidoo.\nAccess is certainly a prerequisite of inclusion, but accessibility compliance doesn\u2019t get you all the way there. It\u2019s possible to check all the boxes but still be left with an unusable interface. And unusable interfaces are necessarily inaccessible ones. Sure, you can take an unusable interface and make it accessibility compliant, but that only placates stakeholders\u2019 lawyers, not users. Users get little value from it.\nSo where have we got to? Access is important, but inclusion is bigger than access. Inclusive design means making something valuable, not just accessible, to as many people as we can.\nSo inclusive design is kind of accessibility + UX?\nCloser, but there are some problems with this definition.\nUX is, you will have already noted, a broad term encompassing activities ranging from conducting research studies to optimizing the perceived affordance of interface elements. 
But overall, what I take from UX is that it\u2019s the pursuit of making interfaces understandable.\nAs it happens, WCAG 2.0 already contains an \u2018Understandable\u2019 principle covering provisions such as readability, predictability and feedback. So you might say accessibility \u2014 at least as described by WCAG \u2014 already covers UX.\nUnfortunately, the criteria are limited, plus some really important stuff (like readability) is relegated to the AAA level; essentially \u201cbonus points if you get the time (you won\u2019t).\u201d\nSo better to let UX folks take care of this kind of thing. It\u2019s what they do. Except, therein lies a danger. UX professionals don\u2019t tend to be well versed in accessibility, so their \u2018solutions\u2019 don\u2019t tend to work for that many people. My friend Billy Gregory coined the term SUX, or \u201cSome UX\u201d: if it doesn\u2019t work for different users, it\u2019s only doing part of the job it should be. \nSUX won\u2019t do, but it\u2019s not just a disability issue. All sorts of user circumstances go unchecked when you\u2019re shooting straight for what people like, and bypassing what people need: device type, device settings, network quality, location, native language, and available time to name just a few.\nIn short, inclusive design means designing things for people who aren\u2019t you, in your situation. In my experience, mainstream UX isn\u2019t very good at that. By bolting accessibility onto mainstream UX we labor under the misapprehension that most people have a \u2018normal\u2019 experience, a few people are exceptions, and that all of the exceptions pertain to disability directly.\nSo inclusive design isn\u2019t really about disability?\nIt is about disability, but not in the same way as accessibility. Accessibility (as it is typically understood, anyway) aims to make sure things work for people with clinically recognized disabilities. Inclusive design aims to make sure things work for people, not forgetting those with clinically recognized disabilities. A subtle, but not so subtle, difference.\nLet\u2019s go back to discussing readability, because that\u2019s a good example. Now: everyone benefits from readable text; text with concise sentences and widely-understood words. It certainly helps people with cognitive impairments, but it doesn\u2019t hinder folks who have less trouble with comprehension. In fact, they\u2019ll more than likely be thankful for the time saved and the clarity. Readable text covers the whole gamut. It\u2019s \u2014 you\u2019ve got it \u2014 inclusive.\nLegibility is another one. A clear, well-balanced typeface makes the reading experience less uncomfortable and frustrating for all concerned, including those who have various forms of visual dyslexia. Again, everyone\u2019s happy \u2014 so why even contemplate a squiggly, sketchy typeface? Leave well alone.\nContrast too. No one benefits from low contrast; everyone benefits from high contrast. Simple. There\u2019s no more work involved, it just entails better decision making. And that\u2019s what design is really: decision making.\nHow about zoom support? If you let your users pinch zoom on their phones they can compensate for poor eyesight, but they can also increase the touch area of controls, inspect detail in images, and compose better screen shots. 
Unobtrusively supporting options like zoom makes interfaces much more inclusive at very little cost.\nAnd when it comes to the underlying HTML code, you\u2019re in luck: it has already been designed, from the outset, to be inclusive. HTML is a toolkit for inclusion. Using the right elements for the job doesn\u2019t just mean the few who use screen readers benefit, but keyboard accessibility comes out-of-the-box, you can defer to browser behavior rather than writing additional scripts, the code is easier to read and maintain, and editors can create content that is effortlessly presentable. \nWait\u2026 are you talking about universal design?\nHmmm. Yes, I guess some folks might think of \u201cuniversal design\u201d and \u201cinclusive design\u201d as synonymous. I just really don\u2019t like the term universal in this context. \nThe thing is, it gives the impression that you should be designing for absolutely everyone in the universe. Though few would adopt a literal interpretation of \u201cuniversal\u201d in this context, there are enough developers who would deliberately misconstrue the term and decry universal design as an impossible task. I\u2019ve actually had people push back by saying, \u201cwhat, so I\u2019ve got to make it work for people who are allergic to computers? What about people in comas?\u201d\nFor everyone\u2019s sake, I think the term \u2018inclusive\u2019 is less misleading. Of course you can\u2019t make things that everybody can use \u2014 it\u2019s okay, that\u2019s not the aim. But with everything that\u2019s possible with web technologies, there\u2019s really no need to exclude people in the vast numbers that we usually are. \nAccessibility can never be perfect, but by thinking inclusively from planning, through prototyping to production, you can cast a much wider net. That means more and happier users at very little if any more effort.\nIf you like, inclusive design is the means and accessibility is the end \u2014 it\u2019s just that you get a lot more than just accessibility along the way.\nConclusion\nThat\u2019s inclusive design. Or at least, that\u2019s a definition for a thing I think is a good idea which I identify as inclusive design. I\u2019ll leave you with a few tips.\nInvolve code early\nWeb interfaces are made of code. If you\u2019re not working with code, you\u2019re not working on the interface. That\u2019s not to say there\u2019s anything wrong with sketching or paper prototyping \u2014 in fact, I recommend paper prototyping in my book on inclusive design. Just work with code as soon as you can, and think about code even before that. Maintain a pattern library of coded solutions and omit any solutions that don\u2019t adhere to basic accessibility guidelines.\nRespect conventions\nYour content should be fresh, inventive, radical. Your interface shouldn\u2019t. Adopt accepted conventions in the appearance, placement and coding of interface elements. Users aren\u2019t there to experience interface design; they\u2019re there to use an interface. In other words: stop showing off (unless, of course, the brief is to experiment with new paradigms in interface design, for an audience of interface design researchers).\nDon\u2019t be exact\n\u201cPerfection is the enemy of good\u201d. But the pursuit of perfection isn\u2019t just to be avoided because nothing ever gets finished. Exacting design also makes things inflexible and brittle. 
If your design depends on elements retaining precise coordinates, they\u2019ll break easily when your users start adjusting font settings or zooming. Choose not to position elements exactly or give them fixed, \u201cmagic number\u201d dimensions. Make less decisions in the interface so your users can make more decisions for it.\nEnforce simplicity\nThe virtue of simplicity is difficult to overestimate. The simpler an interface is, the easier it is to use for all kinds of users. Simpler interfaces require less code to make too, so there\u2019s an obvious performance advantage. There are many design decisions that require user research, but keeping things simple is always the right thing to do. Not simplified or simple-seeming or simplistic, but simple. \nDo a little and do it well, for as many people as you can.", "year": "2016", "author": "Heydon Pickering", "author_slug": "heydonpickering", "published": "2016-12-07T00:00:00+00:00", "url": "https://24ways.org/2016/what-the-heck-is-inclusive-design/", "topic": "process"} {"rowid": 298, "title": "First Steps in VR", "contents": "The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not. \nI think of these two mindsets as production and prototyping, though of course there are lots of overlap and phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people.\nEvery year we see articles saying it will be the \u201cyear of virtual reality\u201d, that was especially prevalent this year. 2016 has been a year of progress, VR isn\u2019t quite mainstream but with efforts like Playstation VR and Google Cardboard, we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web.\nWebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make it easier, there are plenty of resources such as Three.js, A-Frame and ReactVR that help to make the heavy lifting a bit easier.\nGetting Started with A-Frame\nI like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. This is not a tutorial as such, I just want to show how to go about getting involved with VR. 
The beauty of A-Frame is that it is very similar to web components, you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. That\u2019s not to say you can\u2019t build complex things, you have complete access to write JavaScript and shaders.\nI\u2019m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There\u2019s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It\u2019s fun.\nFor this project I wanted to keep things as simple as possible, so I can easily explain it without the classic \u201cdraw a circle then draw an owl\u201d. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy:\n\nMust work on Google Cardboard at a minimum, because of price\nTherefore, it must not rely on having a controller\nAuto-moving around a maze would be a good example\nMove in direction you look\nStretch goal: Scoring, time until you hit a wall or get stuck in maze\nStretch goal: Levels, so the map doesn\u2019t need to be random\nStretch goal: Snow!\n\nI decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground. \n\n\n\n \n \n 24 ways\n \n \n\n\n\n \n \n\n \n\n \n\n \n\n \n \n\n\n\n\nAs you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an <a-scene>, which is the container for everything that is going to show up on the screen. There are a few <a-entity> elements, which can be compared to a <div>
    as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them.\nStyling\nNow we\u2019ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around, you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave.\nNote, you will probably need a server running for images to work. You can do this by running python -m \"SimpleHTTPServer\" in the folder where the code is, then go to localhost:8000 in browser.\nTextures\nUnless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com, one image worked well for the walls and the other for the floor.\n\n \n \n\nThe <a-assets> element is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures. \nTo apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image.\nReplace: \n\nWith:\n\nThis will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined.\nbox.setAttribute('material', 'src: #texture-wall');\nThat\u2019s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object. \nLighting\nThe next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished.\nThere are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the <a-light> entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit.\nTo start with I wanted to light up the scene with a general light, type=\"ambient\", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4, it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (<a-sky>), as it looked a bit odd with a white sky.\n\n\nI felt that the maze looked good with a red tinge but it was a bit flat, everything was the same colour and it was a bit dark. So I added a light within the #player entity, this could have been as an attribute but I set it as a child a-light instead. By using type=\"point\" with a high intensity and low distance, it showed close walls as being lighter. 
It also added a sort-of object to the player, it isn\u2019t a walking human or anything but by moving light where the player is it feels a bit more physical.\n\n\nBy this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the with type=exponential so it looks thicker the further away it is and a mid intensity, so you feel a bit lost but can still see.\n\nI was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers.\nMovement\nOne of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you\u2019re looking, but I wasn\u2019t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.\nAFRAME.registerComponent('automove-controls', {\n init: function () {\n this.speed = 0.1;\n this.isMoving = true;\n this.velocityDelta = new THREE.Vector3();\n },\n isVelocityActive: function () {\n return this.isMoving;\n },\n getVelocityDelta: function () {\n this.velocityDelta.z = this.isMoving ? -speed : 0;\n return this.velocityDelta.clone();\n }\n});\nReplace:\nuniversal-controls\nWith:\nuniversal-controls=\"movementControls: automove, gamepad, keyboard\"\nThis works by creating a component automove-controls that adds auto-move to the player without overriding movement completely. It doesn\u2019t even touch direction, it just checks if isMoving is true then moves the player by the set speed. Components can be creating for adding all kinds of functionality with relative ease. It makes it very powerful for people of all difficulty levels.\nBuilding a map\nCurrently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels.\nI made the maze in Tiled by finding a random tileset online (we don\u2019t need to actually show the images), I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn\u2019t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it\u2019s easy to update in the future. \nvar map =\n{\n \"data\":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n \"height\":10,\n \"width\":10\n}\n\nAs you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don\u2019s example) to instead loop over the map data and place walls where it is 1 and position the player where data is 2. I set the position so that the origin of the map would be 0,1.5,0. 
The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2.\ndocument.querySelector('a-scene').addEventListener('render-target-loaded', function () {\n var WALL_SIZE = 5,\n WALL_HEIGHT = 3;\n var el = document.querySelector('#walls');\n var wall;\n\n for (var x = 0; x < map.height; x++) {\n for (var y = 0; y < map.width; y++) {\n\n var i = y*map.width + x;\n var position = (x-map.width/2)*WALL_SIZE + ' ' + 1.5 + ' ' + (y-map.height/2)*WALL_SIZE;\n if (map.data[i] === 1) {\n // Create wall\n wall = document.createElement('a-box');\n el.appendChild(wall);\n wall.setAttribute('color', '#fff');\n wall.setAttribute('material', 'src: #texture-wall;');\n wall.setAttribute('width', WALL_SIZE);\n wall.setAttribute('height', WALL_HEIGHT);\n wall.setAttribute('depth', WALL_SIZE);\n wall.setAttribute('position', position);\n wall.setAttribute('static-body', '');\n }\n\n if (map.data[i] === 2) {\n // Set player position\n document.querySelector('#player').setAttribute('position', position);\n }\n\n }\n }\n console.info('Walls added.');\n});\n\nWith this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There\u2019s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web.\nIt\u2019s Not All Fun And Games\nA lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that\u2019s just a tiny subset. There are all sorts of applications for VR, including story telling, data visualisation and even meditation.\nThere have been a number of cases where it has been shown virtual reality can help as a tool for therapies:\n\nOxford study finds virtual reality can help treat severe paranoia\nVirtual Reality Therapy for Phobias at the Duke Faculty Practice\nBravemind: Virtual Reality Exposure Therapy at the University of Southern California\n\nThese are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching and journalism tool.\nWrapping Up\nTen years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, that WAP 2.0 included the XHTML Mobile Profile meaning it would be familiar to web folk. \u201cThe mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing \u201cdesktop web\u201d skills to understand how to develop content for it.\u201d\nWe can look at that and laugh a little, we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the \u201cdesktop web\u201d Cameron was referring to.\nSo while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble, we don\u2019t need to learn any new languages. 
If on the 2026 edition of 24 ways, somebody references this article and looks at how far we have come\u2026 well, let\u2019s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well it\u2019s fun\u2026 have a go anyway.", "year": "2016", "author": "Shane Hudson", "author_slug": "shanehudson", "published": "2016-12-11T00:00:00+00:00", "url": "https://24ways.org/2016/first-steps-in-vr/", "topic": "code"} {"rowid": 297, "title": "Public Speaking with a Buddy", "contents": "My book Demystifying Public Speaking focuses on the variety of fears we each have about giving a talk. From presenting to a client, to leading a team standup, to standing on a conference stage, there are lots of things we can do to prepare ourselves for the spotlight and reduce those fears.\nThough it didn\u2019t make it into the final draft, I wanted to highlight how helpful it can be to share that public speaking spotlight with another person, or a few more people. If you have fears about not knowing the answer to a question, fumbling your words, or making a mistake in the spotlight, then buddying up may be for you!\n\nTo some, adding more people to a presentation sounds like a recipe for on-stage disaster. To others, having a friendly face nearby\u2014a partner who can step in if you fumble\u2014is incredibly reassuring. As design director Yesenia Perez-Cruz writes, \u201cWhile public speaking is a deeply personal activity, you don\u2019t have to go it alone. Nothing has helped my speaking career more than turning it into a group effort.\u201d\nCo-presenting can level up a talk in two ways: an additional brain and presentation skill set can improve the content of the talk itself, and you may feel safer with the on-stage safety net of your buddy. \nFor example, when I started giving lengthy workshops about building mobile device labs with my co-worker Destiny Montague, we brought different experience to the table. I was able to talk about the user experience of our lab, and the importance of testing across different screen sizes. Destiny spoke about the hardware aspects of the lab, like power consumption and networking. Our audience benefitted from the spectrum of insight we included in the talk.\nMoreover, Destiny and I kept each other energized and engaging while teaching our audience, having way more fun onstage. Partnering up alleviated the risk (and fear!) of fumbling; where one person makes a mistake, the other person is right there to help. Buddy presentations can be helpful if you fear saying \u201cI don\u2019t know\u201d to a question, as there are other people around you who will be able to help answer it from the stage. By partnering with someone whom I trust and respect, and whose work and knowledge augments my own, it made the experience\u2014and the presentation!\u2014significantly better.\nCo-presenting won\u2019t work if you don\u2019t trust the person you\u2019re onstage with, or if you don\u2019t have good chemistry working together. It might also not work if there\u2019s an imbalance of responsibilities, both in preparing the talk and giving it. Read on for how to make partner talks work to your advantage!\nTrustworthiness\nIf you want to explore co-presenting, make sure that your presentation partner is trustworthy and can carry their weight; it can be stressful if you find yourself trying to meet deadlines and prepare well and your partner isn\u2019t being helpful. 
We\u2019re all about reducing the fears and stress levels surrounding being in that spotlight onstage; make sure that the person you\u2019re relying on isn\u2019t making the process harder.\nBefore you start working together, sketch out the breakdown of work and timeline you\u2019re each committing to. Have a conversation about your preferred work style so you each have a concrete understanding of the best ways to communicate (in what medium, and how often) and how to check in on each other\u2019s progress without micromanaging or worrying about radio silence. Ask your buddy how they prefer to receive feedback, and give them your own feedback preferences, so neither of you are surprised or offended when someone\u2019s work style or deliverable needs to be tweaked.\nThis should be a partnership in which you both feel supported; it\u2019s healthy to set all these expectations up front, and create a space in which you can each tweak things as the work progresses.\nTalk flow and responsibilities\nThere are a few different ways to organize the structure of your talk with multiple presenters. Start by thinking about the breakdown of the talk content\u2014are there discrete parts you and the other presenters can own or deliver? Or does it feel more appropriate to deliver the entirety of the content together?\nIf you\u2019re finding that you can break down the content into discrete chunks, figure out who should own which pieces, and what ownership means. Will you develop the content together but have only one person present the information? Or will one person research and prepare each content section in addition to delivering it solo onstage?\nRehearse how handoffs will go between sections so it feels natural, rather than stilted. I like breaking a presentation into \u201cchapters\u201d when I\u2019m passionate about particular aspects of a topic and can speak on those, but know that there are other aspects to be shared and there\u2019s someone else who can handle (and enjoy!) talking about them. When Destiny and I rehearsed our \u201cchapter\u201d handoffs, we developed little jingles that we\u2019d both sing together onstage; it indicated to the audience that it was a planned transition in the content, and tied our independent work together into a partnership.\n\nAlternatively, you can give the presentation in a way that\u2019s close to having a rehearsed conversation, rather than independently presenting discrete parts of the talk. In this case, you\u2019ll both be sharing the spotlight at the same time, throughout the duration of the talk. Preparation is key, here, to make sure that you each understand what needs to be communicated, and you have a sense of who will be taking responsibility for communicating those different pieces of information. A poorly-prepared talk like this will look like the co-presenters are talking over each other, or hesitating awkwardly to give the other person more room to speak; the audience will feel how uncomfortable this is, and will probably be distracted from the talk content. 
Practice the talk the whole way through multiple times so you know what each person is planning on covering and how you want to interact with each other while you\u2019re both holding microphones; also figure out how you\u2019ll be standing in relation to each other. More on that next!\nSharing the stage\nIf you choose to give a talk with a partner, determine ahead of time how you\u2019ll stand (or sit). For example, if you each take \u201cchapters\u201d or major sections of the presentation, ensure that it\u2019s clear who the audience should focus their attention on. You could sit in a chair off to the side (or stand). I recommend placing yourself far enough away that you\u2019re not distracting to the audience; you don\u2019t want them watching you while your partner is speaking. If the audience can still see you, but their focus should be on your buddy, be sure to not look distracted; keep your eyes on your buddy, and don\u2019t just open your laptop and ignore what\u2019s happening! Feel free to smile, laugh, or react how the audience should be reacting as your partner is speaking.\nIf you\u2019re both sharing the spotlight at the same time and having a rehearsed conversation, make sure that your body language engages the audience and you\u2019re not just speaking to each other, ignoring the folks watching. Watch this talk with Guy Podjarny and Assaf Hefetz who have partnered up to talk about security; they have clearly identified roles onstage, and remain engaged with the audience.\n\nConsider whether or not you will share a microphone, or if you will both be mic\u2019d. (Be sure that the event organizer, or the A/V team, has a heads-up well in advance to ensure they have the equipment handy!) Also talk through how you\u2019d like to handle Q&A time during or after the talk, especially if you have clear \u201cchapters\u201d where Q&A might happen naturally during a handoff. The more clarity you and your partner have about who is responsible for which pieces of information sharing, the more you can feel and appear prepared.\nCo-presenting does take a lot of preparation and requires a ton of communication between you and your partner. But the rewards can be awesome: double the brains onstage to help answer questions and communicate information, and a friendly face to help comfort you if you feel nervous.", "year": "2016", "author": "Lara Hogan", "author_slug": "larahogan", "published": "2016-12-06T00:00:00+00:00", "url": "https://24ways.org/2016/public-speaking-with-a-buddy/", "topic": "process"} {"rowid": 296, "title": "Animation in Design Systems", "contents": "Our modern front-end workflow has matured over time to include design systems and component libraries that help us stay organized, improve workflows, and simplify maintenance. These systems, when executed well, ensure proper documentation of the code available and enable our systems to scale with reduced communication conflicts. \nBut while most of these systems take a critical stance on fonts, colors, and general building blocks, their treatment of animation remains disorganized and ad-hoc. Let\u2019s leverage existing structures and workflows to reduce friction when it comes to animation and create cohesive and performant user experiences. \nUnderstand the importance of animation\nPart of the reason we treat animation like a second-class citizen is that we don\u2019t really consider its power. When users are scanning a website (or any environment or photo), they are attempting to build a spatial map of their surroundings. 
During this process, nothing quite commands attention like something in motion. \nWe are biologically trained to notice motion: evolutionarily speaking, our survival depends on it. For this reason, animation, when done well, can guide your users. It can aid and reinforce these maps, and give us a sense that we understand the UX more deeply. We retrieve information and put it back where it came from instead of something popping in and out of place. \n\n\u201cWhere did that menu go? Oh it\u2019s in there.\u201d \n\nFor a deeper dive into how animation can connect disparate states, I wrote about the Importance of Context-Shifting in UX Patterns for CSS-Tricks.\nAn animation flow on mobile.\nAnimation also aids in perceived performance. Viget conducted a study where they measured user engagement with a standard loading GIF versus a custom animation. Customers were willing to wait almost twice as long for the custom loader, even though it wasn\u2019t anything very fancy or crazy. Just by showing their users that they cared about them, they stuck around, and the bounce rates dropped.\n\n14 second generic loading screen.\n22 second custom loading screen.\nThis also works for form submission. Giving your personal information over to an online process like a static form can be a bit harrowing. It becomes more harrowing without animation used as a signal that something is happening, and that some process is completing. That same animation can also entertain users and make them feel as though the wait isn\u2019t as long. \nEli Fitch gave a talk at CSS Dev Conf called \u201cPerceived Performance: The Only Kind That Really Matters\u201d, which is one of my favorite talk titles of all time. In it, he discussed how we tend to measure things like timelines and network requests because they are more quantifiable\u2013and therefore easier to measure\u2013but that measuring how a user feels when visiting the site is more important and worth the time and attention. \nIn his talk, he states \u201cHumans over-estimate passive waits by 36%, per Richard Larson of MIT\u201d. This means that if you\u2019re not using animation to make the wait for a form submission feel faster, users are perceiving it to be much slower than the dev tools timeline is recording.\nRein it in\nUnlike fonts, colors, and so on, we tend to add animation in as a last step, which leads to disorganized implementations that lack overall cohesion. If you asked a designer or developer if they would create a mockup or build a UI without knowing the fonts they were working with, they would dislike the idea. Not knowing the building blocks they\u2019re working with means that the design can fall apart or the development can break with something so fundamental left out at the start. Good animation works the same way.\nThe first step in reining in your use of animation is to perform an animation audit. Look at all the places you are using animation on your site, or the places you aren\u2019t using animation but probably should. (Hint: perceived performance of a loader on a form submission can dramatically change your bounce rates.) \nNot sure how to perform a good audit? Val Head has a great chapter on it in her book, Designing Interface Animations, which has buckets of research and great ideas.\nEven some beautiful component libraries that have animation in the docs make this mistake. You don\u2019t need every kind of animation, just like you don\u2019t need every kind of font. This bloats our code. 
Ask yourself questions like: do you really need a flip 180 degree animation? I can\u2019t even conceive of a place on a typical UI where that would be useful, yet most component libraries that I\u2019ve seen have a mixin that does just this.\nWhich leads to\u2026\nHave an opinion\nMany people are confused about Material Design. They think that Material Design is Motion Design, mostly because they\u2019ve never seen anyone take a stance on animation before and document these opinions well. But every time you use Material Design as your motion design language, people look at your site and think GOOGLE. Now that\u2019s good branding.\nBy using Google\u2019s motion design language and not your own, you\u2019re losing out on a chance to be memorable on your own website.\nWhat does having an opinion on motion look like in practice? It could mean you\u2019ve decided that you never flip things. It could mean that your eases are always going to glide. In that instance, you would put your efforts towards finding an ease that looks \u201cgliding\u201d and pulling out any transform: scaleX(-1) animation you find on your site. Across teams, everyone knows not to spend time mocking up flipping animation (even if they\u2019re working on an entirely different codebase), and to instead work on something that feels like it glides. You save time and don\u2019t have to communicate again and again to make things feel cohesive.\nCreate good developer resources\nSometimes people don\u2019t incorporate animation into a design system because they aren\u2019t sure how, beyond the base hover states. All animation properties can be broken into interchangeable pieces. This allows developers and designers alike to mix and match and iterate quickly, while still staying in the correct language. Here are some recommendations (with code and a demo to follow):\nCreate timing units, similar to h1, h2, h3. In a system I worked on recently, I called these t1, t2, t3. T1 would be reserved for longer pieces, down to t5 which is a bit like h5 in that it\u2019s the default (usually around .25 seconds or thereabouts).\nKeep animation easings for entrance, exit, entrance emphasis and exit emphasis that people can commonly refer to. This, and the animation-fill-mode, are likely to be the only two properties that can be reused for the entrance and exit of the animation.\nUse the animation-name property to define the keyframes for the animation itself. I would recommend starting with 5 or 6 before making a slew of them, and see if you need more. Writing 30 different animations might seem like a nice resource, but just like your color palette having too many can unnecessarily bulk up your codebase, and keep it from feeling cohesive. Think critically about what you need here. \nSee the Pen Modularized Animation for Component Libraries by Sarah Drasner (@sdras) on CodePen.\n\nThe example above is pared-down, but you can see how in a robust system, having pieces that are interchangeable cached across the whole system would save time for iterations and prototyping, not to mention make it easy to make adjustments for different feeling movement on the same animation easily.\nOne low hanging fruit might be a loader that leads to a success dialog. On a big site, you might have that pattern many times, so writing up a component that does only that helps you move faster while also allowing you to really zoom in and focus on that pattern. 
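To make the idea of interchangeable pieces a little more concrete, here is a minimal sketch of what those building blocks might look like in plain CSS. The class names and values (t1 to t5, the easing names, the fade-in-up keyframes) are illustrative stand-ins rather than the exact tokens from the system described above:

/* Timing units: t1 is the longest, t5 the short default */
.t1 { animation-duration: 2s; }
.t2 { animation-duration: 1s; }
.t3 { animation-duration: 0.5s; }
.t4 { animation-duration: 0.35s; }
.t5 { animation-duration: 0.25s; }

/* Shared easings for entrances and exits */
.ease-entrance {
  animation-timing-function: cubic-bezier(0.215, 0.61, 0.355, 1);
  animation-fill-mode: both;
}
.ease-exit {
  animation-timing-function: cubic-bezier(0.55, 0.055, 0.675, 0.19);
  animation-fill-mode: both;
}

/* A small, named set of keyframes */
@keyframes fade-in-up {
  from { opacity: 0; transform: translateY(10px); }
  to { opacity: 1; transform: none; }
}
.fade-in-up { animation-name: fade-in-up; }

With pieces like these, a success dialog might combine the classes fade-in-up, ease-entrance and t4, mixing a duration, an easing and a set of keyframes without writing a brand new animation every time.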
You avoid throwing something together at the last minute, or using a GIF, which are really heavy and mushy on retina. You can make singular pieces that look really refined and are reusable. \nReact and Vue Implementations are great for reusable components, as you can create a building block with a common animation pattern, and once created, it can be a resource for all. Remember to take advantage of things like props to allow for timing and easing adjustments like we have in the previous example!\nResponsive\nAt the very least we should ensure that interaction also works well on mobile, but if we\u2019d like to create interactions that take advantage of all of the gestures mobile has to offer, we can use libraries like zingtouch or hammer to work with swipe or multiple finger detection. With a bit of work, these can all be created through native detection as well.\nResponsive web pages can specify initial-scale=1.0 in the meta tag so that the device is not waiting the required 300ms on the secondary tap before calling action. Interaction for touch events must either start from a larger touch-target (40px \u00d7 40px or greater) or use @media(pointer:coarse) as support allows.\nBuy-in\nSometimes people don\u2019t create animation resources simply because it gets deprioritized. But design systems were also something we once had to fight for, too. This year at CSS Dev Conf, Rachel Nabors demonstrated how to plot out animation wants vs. needs on a graph (reproduced with her permission) to help prioritize them:\n\n\nThis helps people you\u2019re working with figure out the relative necessity and workload of the addition of these animations and think more critically about it. You\u2019re also more likely to get something through if you\u2019re proving that what you\u2019re making is needed and can be reused. \nGood compromises can be made this way: \u201cwe\u2019re not going to go all out and create an animated \u2018About Us\u2019 page like you wanted, but I suppose we can let our users know their contact email went through with a small progress and success notification.\u201d \nSuccessfully pushing smaller projects through helps build trust with your team, and lets them see what this type of collaboration can look like. This builds up the type of relationship necessary to push through projects that are more involved. It can\u2019t be overstressed that good communication is key.\nGet started!\nWith these tools and good communication, we can make our codebases more efficient, performant, and feel better for our users. We can enhance the user experience on our sites, and create great resources for our teams to allow them to move more quickly while innovating beautifully.", "year": "2016", "author": "Sarah Drasner", "author_slug": "sarahdrasner", "published": "2016-12-16T00:00:00+00:00", "url": "https://24ways.org/2016/animation-in-design-systems/", "topic": "code"} {"rowid": 295, "title": "Internet of Stranger Things", "contents": "This year I\u2019ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I\u2019m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension1.\nWhat you\u2019ll see\nYou can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. 
the internet) will be able to write messages to you from their browser in real time. In fact why not try it now; check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it, hit the \u201cSend a message\u201d button and wait your turn.)\n\n\n\n\nIt\u2019s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I\u2019m using WebSockets to communicate in real time between the browser, server and Raspberry Pi.\nWhat you\u2019ll need\n\nRaspberry Pi any of the following models: Zero (will need straight male header pins soldered2 and Micro USB OTG adaptor), A+, B+, 2, or 3\nMicro SD card at least 4Gb Class 10 speed3\nMicro USB power supply at least 2A\nUSB Wifi dongle (unless you have a Pi 3 - that has wifi built in). \nAddressable fairy lights\nLogic level shifter (with pins soldered unless you want to do it!)\nBreadboard\nJumper wires (3x male to male and 4x female to male)\n\nOptional but recommended\n\nBase board to hold the Pi and Breadboard (often comes with a breadboard!)\n\nFind links for where to buy all of these items that goes along with this tutorial. The total price should be around $1004.\nSetting up the Raspberry Pi\nYou\u2019ll need to install the SD card for the Raspberry Pi. You\u2019ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher5. \nNext up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There\u2019s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details6.\nnetwork={\n ssid=\"mywifi\"\n psk=\"mypassword\"\n}\nSave the file, eject the card from your computer and plug it into the Raspberry Pi. \nIf you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle7 and power supply, but don\u2019t plug it in yet!\nWiring!\nTime to wire everything up! \nFirst of all, push the Logic Level Converter into the middle of the breadboard:\n\nLogic Level Converter\nThe logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top. \n\nRaspberry Pi pins only output 3.3v but the lights need 5v. That\u2019s why we need the logic level converter in there to boost up the signal.\nConnect the first two wires between the Raspberry Pi pins and the breadboard:\n\nNote that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don\u2019t have to match but it\u2019s easier to follow (and check) if you use the same ones as in the diagram. \n\nThen the next two:\n\nThis is what you should have so far:\n\nLights\nNow to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard.\n\n\n\nMake sure that you connect the right end of the lights, mine has a male connector at the wrong end so it\u2019s impossible to do this, but double check. 
\nAlso make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or \u2018-\u2019 or G) and DI (sometimes DIN for data in).\n\nYou can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that\u2019s what takes the data along to the chip in the next light. Make sure that you\u2019re plugging into the data-in and not the data-out! \nThat\u2019s it! Everything\u2019s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they\u2019re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check!\n\nThe Moment of Truth!\nPlug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that\u2019s your confirmation that it\u2019s connected to my WebSocket server and ready to receive messages from the upside-down! \n\nHowever, if the first light in the string is pulsing red, it means that you\u2019re not connected to the internet. So check the Troubleshooting section of the support document. If it\u2019s pulsing green then you\u2019re connected to the internet but can\u2019t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it\u2019ll come back up. \nRig up the lights!\nFix the lights up on the wall however you want, pins, nails, tape. I\u2019ve used cable clips. Just be careful! I\u2019m using a 50 light string so I\u2019ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi. \nCheck the photo here to see how the lights line up, note that there are spare unused lights in-between each row:\n\nNow visit lights.seb.ly and you\u2019ll see this : \n\nIf you\u2019re the only one online you\u2019ll have direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you\u2019ll need to click the button to join the queue and wait your turn. \nHow it works - the geeky details!\nElectronics:\nThe pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them, LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer). \nAddressable LEDs or \u201cNeopixels\u201d\nWe\u2019re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string. \nThe chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. 
They are commonly referred to as Neopixels8 and I used them on my Laser Light Synths project.\nNeopixels with the chip and the LED all in one - it\u2019s the white square shaped component and the darker square inside is the chip. These are only 5mm wide!\nA Laser Light Synth! Covered with around 800 super bright neopixels!\nLogic Level Converter\nThe logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip. \nPower\nNeopixels can often draw a lot of current so you need to be careful how you power them. I\u2019ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you\u2019ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit\u2019s Neopixel Uberguide. \nNode.js\nThere are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they\u2019re hosted on npm as stranger-lights and stranger-lights-server. \nThe server side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time.\nWebSockets\nI\u2019m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connects to my WebSocket server. \nWhen you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers9. \nThe Raspberry Pi code\nThe Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file: \nnode /home/pi/strangerthings/client.js > /dev/null &\nAnything in the rc.local file gets executed when the Pi boots up and this line of code runs the Node.js app and routes its output to nowhere (ie /dev/null). The & means that it runs it in the background and doesn\u2019t hold up the boot process. \nWorking with the Raspberry Pi headless\nYou might know that when a computer has no screen or keyboard, you would refer to it as \u201crunning headless\u201d. So just like most web servers, you need to configure it over the network with ssh10. If you\u2019re on a mac you can find your Pi on the network through the name raspberrypi.local11, otherwise you\u2019ll need to find its IP address. There\u2019s more on the guide to Remote Access instructions on the Raspberry Pi website. And if you\u2019re very new to the terminal, I highly recommend this great online Linux command line tutorial.\nImprovements\nThis is quite an early experiment and I\u2019m sure I\u2019ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I\u2019ve just rigged up my lights with Post-it notes. It\u2019d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style. 
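If you want to adapt the behaviour as well as the paint job, it helps to see how small the WebSocket relay described above really is. The real code lives in the stranger-lights and stranger-lights-server packages mentioned earlier; the sketch below is only an illustration of the idea, and the event name and server URL in it are made up:

// Server side: take a letter from any browser and broadcast it
// to every connected client, Raspberry Pis and browsers alike.
var io = require('socket.io')(8080);

io.on('connection', function (socket) {
  socket.on('letter', function (data) {
    io.emit('letter', data); // e.g. { letter: 'R' }
  });
});

// Raspberry Pi side: listen for letters and light the matching LED.
var socket = require('socket.io-client')('https://lights.example.com');

socket.on('letter', function (data) {
  // look up which pixel maps to data.letter and flash it here
});

The queueing and who-is-in-control logic sits on top of this, but at its core the project is just small messages being passed from browser to server to Pi in real time.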
\nWhere next?\nFinding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that\u2019s something you\u2019d be interested in, or sign up to my mailing list at st4i.com. \nThere are many many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you\u2019ll be spoilt for choice! \nAlso take a look at Arduino - it\u2019s an incredibly popular platform for electronics and the internet is literally bursting with resources. \nI hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly. \n\n\n\n\nNot a particularly original idea, but I don\u2019t think I\u2019ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables\u00a0\u21a9\ufe0e\n\n\nVideo guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit\u00a0\u21a9\ufe0e\n\n\nSlower cards will work but performance may suffer\u00a0\u21a9\ufe0e\n\n\nOr \u00a35,000 in UK money. Sorry, Brexit joke :)\u00a0\u21a9\ufe0e\n\n\nYou will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. \u00a0\u21a9\ufe0e\n\n\nSSID and password should be all that you need but you can see all the config options on this wpa supplicant guide\u00a0\u21a9\ufe0e\n\n\nRaspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle\u00a0\u21a9\ufe0e\n\n\nThanks to Adafruit who invented the term neopixels so we don\u2019t have to refer to them as WS281x any more!\u00a0\u21a9\ufe0e\n\n\nSo you can see other people sending messages in the browser\u00a0\u21a9\ufe0e\n\n\nssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal.\u00a0\u21a9\ufe0e\n\n\nYou can change this default hostname using raspi-config\u00a0\u21a9\ufe0e", "year": "2016", "author": "Seb Lee-Delisle", "author_slug": "sebleedelisle", "published": "2016-12-01T00:00:00+00:00", "url": "https://24ways.org/2016/internet-of-stranger-things/", "topic": "code"} {"rowid": 294, "title": "New Tricks for an Old Dog", "contents": "Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy.\nI\u2019ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn\u2019t know or exposes an area of uncertainty that you can go work on.\nIn this article, I hope to share some experiences and techniques that will prove useful in your own situation and that you can impress your friends in some new and exciting ways!\nHow do you do, fellow kids?\nTo start with I\u2019d like to introduce you to our JavaScript framework, Flight. 
Right now it\u2019s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React.\nOver time, as we used Flight for more complex interfaces, we found it wasn\u2019t scaling with us.\nComposing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn\u2019t come built-in to Flight, and it made reusing components a real challenge.\nThere was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn\u2019t predict how a component would be built until you opened it.\nMaking matters worse, Flight relied on events to move data around the application. Unfortunately, events aren\u2019t good for giving structure to complex logic. They jump around in a way that\u2019s hard to understand and debug, and force you to search your code for a specific string \u2014 the event name\u201a to figure out what\u2019s going on.\nTo find fixes for these problems, we looked around at other frameworks. We like React for it\u2019s simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone.\nBut when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you\u2019ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort!\nInstead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we already were using.\nBoiled down, what we liked seemed quite simple:\n\nComponent nesting & composition\nEasy, predictable state management\nNormal functions for data manipulation\n\nMaking these ideas applicable to Flight took some time, but we\u2019re in a much better place now. Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app.\nWhile the specifics of our situation and Flight aren\u2019t really important, this experience taught me something: \n\nDistill good tech into great ideas. You can apply great ideas anywhere.\n\nYou don\u2019t have to use cool kids\u2019 latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already?\nTimes, they are a changin\u2019\nApart from stealing ideas from the new and shiny, how can we keep make the most of improved tooling and techniques? Times change and so should the way we write code.\nGoing back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out new language features, and without a more advanced build tools like Webpack, every module\u2019s source was encased in AMD boilerplate.\nIn fact, we found ourselves with a mix of both AMD syntaxes:\ndefine([\"lodash\"], function (_) {\n // . . .\n});\n\ndefine(function (require) {\n var _ = require(\"lodash\");\n // . . .\n});\nThis just wouldn\u2019t do. 
And besides, what we really wanted was CommonJS, or even ES2015 module syntax:\nimport _ from \"lodash\";\nThese days we\u2019re using Babel, Webpack, ES2015 modules and many new language features that make development just\u2026 better. But how did we get there?\nTo explain, I want to introduce you to codemods and jscodeshift.\nA codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL(\"...\") to new URL(\"...\").\njscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file\u2019s syntax tree \u2013 a data-structure representation of the code \u2014 finding and modifying in place as it goes.\nHere\u2019s an example that renames all instances of the variable foo to bar:\nmodule.exports = function (fileInfo, api) {\n return api\n .jscodeshift(fileInfo.source)\n .findVariableDeclarators('foo')\n .renameTo('bar')\n .toSource();\n};\nIt\u2019s a seriously powerful tool, and we\u2019ve used it to write a series of codemods that:\n\nrename modules,\nunify our use of AMD to a single syntax,\ntransition from one testing framework to another, and\nswitch from AMD to CommonJS.\n\nThese changes can be pretty huge and far-reaching. Here\u2019s an example commit from when we switched to CommonJS:\ncommit 8f75de8fd4c702115c7bf58febba1afa96ae52fc\nDate: Tue Jul 12 2016\n\n Run AMD -> CommonJS codemod\n\n 418 files changed, 47550 insertions(+), 48468 deletions(-)\n\nYep, that\u2019s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone!\n\nFrom this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes:\n\nFind all the existing patterns\nChoose the two most similar\nUnify with a codemod\nRepeat.\n\nFor example:\n\nFor module loading, we had 2 competing AMD patterns plus some use of CommonJS\nThe two AMD syntaxes were the most similar\nWe used a codemod to move to unify the AMD patterns\nLater we returned to AMD to convert it to CommonJS\n\nIt\u2019s worked for us, and if you\u2019d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer.\nWelcome aboard!\nAs TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad of microservices that manage our data and their layers of authentication, security and business logic around them make for an overwhelming amount of information to hand to a newbie.\nInspired by Amy\u2019s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design our onboarding that each of our new hires go through to make the most of their first few weeks.\nJoining a new company, team, or both, is stressful and uncomfortable. Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding!\nAnd as you build up an onboarding process, you\u2019ll create things that are useful for more than just new hires; it\u2019ll force you to write documentation, for example, in a way that\u2019s understandable for people who are unfamiliar with your team, product and codebase. 
This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help.\nThis is something that\u2019s taken for granted in open source, but somehow I think we forget about it in big companies.\nAfter all, better documentation is just a good thing. You will forget things from time to time, and you\u2019d be surprised how often the \u201cbeginner\u201d docs help!\nFor TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts:\n\nWhat are our dependencies?\nWhere are the potential points of failure?\nWhere does authentication live? Storage? Caching?\nWho owns \u201cX\u201d?\n\n\nOf course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track.\nTo address this, we\u2019ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review.\nIn my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team\u2019s work. But, if you\u2019re not doing it, here\u2019s my suggestion for getting started:\n\nEvery pull request gets a +1 from someone else.\n\nThat\u2019s all \u2014 it\u2019s very light-weight and easy. Just ask someone to have a quick look over your code before it goes into master.\nAt Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things:\n\nDon\u2019t review for more than an hour 1\nKeep reviews smaller than ~400 lines 2\nCode review your own code first 2\n\nAfter an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone\u2019s put code up for a review, that review is blocking them from doing other work. It\u2019s your job to unblock them.\nOn TweetDeck, we actually try to keep reviews under 250 lines. It doesn\u2019t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration.\nBut the most important thing I\u2019ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team\u2019s: with fresh, critical eyes, after a break, using a dedicated code review tool.\nIt\u2019s amazing what you can spot when you put a new interface around code you\u2019ve been staring at for hours!\nAnd yes, this list features science. The data backs up these conclusions, and if you\u2019d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It\u2019s ace.\nFor more dedicated information sharing, we\u2019ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team-member shares or teaches something to everyone else, and next time it\u2019s someone else\u2019s turn.
Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results.\nIf you\u2019d like to run a seminar, one thing you could try to get started: run a point at the thing you least understand in our architecture session \u2014 thanks to James for this idea. And guess what\u2026 your onboarding architecture diagrams will help (and benefit from) this!\nMore, please!\nThere\u2019s a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations.\nAnd finally, thanks to Amy for proof reading this and to Passy for feedback on the original talk.\n\n\n\n\nDunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Beverly, MA: SmartBear Software.\u00a0\u21a9\n\n\nCohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Proceedings of the 22nd ICSE 2000: 467-476.\u00a0\u21a9 \u21a9", "year": "2016", "author": "Tom Ashworth", "author_slug": "tomashworth", "published": "2016-12-18T00:00:00+00:00", "url": "https://24ways.org/2016/new-tricks-for-an-old-dog/", "topic": "code"} {"rowid": 293, "title": "A Favor for Your Future Self", "contents": "We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens. \nWe comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front just in case to give us a bit of safety later.\nI want to talk about a situation that Past Alicia and Team couldn\u2019t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. It wasn\u2019t a very pleasant experience, not only for the reason why we had to remove so much of what we had built, but also because it\u2019s the ultimate \u201cI really hope this doesn\u2019t break something else\u201d situation. It was a stressful and tedious effort of triple checking that the things we were removing weren\u2019t dependencies elsewhere. To be honest, we wouldn\u2019t have been able to do this with any amount of success or confidence without our test suite.\nWriting tests for code is one of those things that developers really, really don\u2019t want to do. It\u2019s one of the easiest things to cut in the development process, and there\u2019s often a struggle to have developers start writing tests in the first place. One of the best lessons the web has taught us is that we can\u2019t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren\u2019t in a tough spot later on because we only thought of the best case scenarios.\nJavaScript\nRegardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you\u2019re choosing to build a JavaScript heavy app, you absolutely should be writing some combination of unit and integration tests.\nUnit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. 
Great for reused functions and small, scoped areas, this is the closest you examine your code with the testing microscope. For example, if we were to build a calculator, the most minute piece we could test could be the basic operations.\n/*\n * This example uses a test framework called Jasmine\n */\n\ndescribe(\"Calculator Operations\", function () {\n\n it(\"Should add two numbers\", function () {\n\n // Say we have a calculator\n Calculator.init();\n\n // We can run the function that does our addition calculation...\n var result = Calculator.addNumbers(7,3);\n\n // ...and ensure we're getting the right output\n expect(result).toBe(10);\n\n });\n});\nEven though these teeny bits work in isolation, we should ensure that connecting the large pieces work, as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let\u2019s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn\u2019t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.).\n it(\u201cShould remember the last calculation\u201d, function () {\n\n // Run an operation\n Calculator.addNumbers(7,10);\n\n // Expect something else to have happened as a result\n expect(Calculator.updateCurrentValue).toHaveBeenCalled();\n expect(Calculator.currentValue).toBe(17);\n });\nUnit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although it still might happen, you could be able to catch problems way sooner than without a test suite, and hopefully never push those failures to your production environment.\nInterfaces\nRegardless of how you\u2019re building something, it most definitely has some kind of interface. Whether you\u2019re using a very barebones structure, or you\u2019re leveraging a whole design system, these things can be tested as well.\nAcceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. 
These are not necessarily for simulating edge-case scenarios, but rather ensuring that our core offerings are stable.\nFor example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress.\n/*\n * This example uses Jasmine along with an add-on called jasmine-integration\n */\n\ndescribe(\"Acceptance tests\", function () {\n\n // Go to our signup page\n var page = visit(\"/signup\");\n\n // Fill our signup form with invalid information\n page.fill_in(\"input[name='email']\", \"Not An Email\");\n page.fill_in(\"input[name='name']\", \"Alicia\");\n page.click(\"button[type=submit]\");\n\n // Check that we get an expected error message\n it(\"Shouldn't allow signup with invalid information\", function () {\n expect(page.find(\"#signupError\").hasClass(\"hidden\")).toBeFalsy();\n });\n\n // Now, fill our signup form with valid information\n page.fill_in(\"input[name='email']\", \"thisismyemail@gmail.com\");\n page.fill_in(\"input[name='name']\", \"Gerry\");\n page.click(\"button[type=submit]\");\n\n // Check that we get an expected success message and the error message is hidden\n it(\"Should allow signup with valid information\", function () {\n expect(page.find(\"#signupError\").hasClass(\"hidden\")).toBeTruthy();\n expect(page.find(\"#thankYouMessage\").hasClass(\"hidden\")).toBeFalsy();\n });\n});\nIn terms of visual design, we\u2019re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga.\nTools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images. The code would look something like this:\n/*\n * This example uses PhantomCSS\n */\n\ncasper.start(\"/home\").then(function(){\n\n // Initial state of form\n phantomcss.screenshot(\"#signUpForm\", \"sign up form\");\n\n // Hit the sign up button (should trigger error)\n casper.click(\"button#signUp\");\n\n // Take a screenshot of the UI component\n phantomcss.screenshot(\"#signUpForm\", \"sign up form error\");\n\n // Fill in form by name attributes & submit\n casper.fill(\"#signUpForm\", {\n name: \"Alicia Sedlock\",\n email: \"alicia@example.com\"\n }, true);\n\n // Take a second screenshot of success state\n phantomcss.screenshot(\"#signUpForm\", \"sign up form success\");\n});\nYou run this code before starting any development, to create your baseline set of screen captures. After you\u2019ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. 
Say you changed your margins on our form elements \u2013 your image diff would look something like this:\n\nThis is a great tool for ensuring not just your site retains its expected styling, but it\u2019s also great for ensuring nothing accidentally changes in the living style guide or modular components you may have developed. It\u2019s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored.\nConclusion\nThe shape and size of what you\u2019re testing for your site or app will vary. You may not need lots of unit or integration tests if you don\u2019t write a lot of JavaScript. You may not need visual regression testing for a one page site. It\u2019s important to assess your codebase to see which tests would provide the most benefit for you and your team.\nWriting tests isn\u2019t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that\u2019s broken breaks trust with our users, and it\u2019s our responsibility as developers to make sure that trust isn\u2019t broken. Testing shouldn\u2019t be considered a \u201cnice to have\u201d - it should be an integral piece of our workflow and our day-to-day job.", "year": "2016", "author": "Alicia Sedlock", "author_slug": "aliciasedlock", "published": "2016-12-03T00:00:00+00:00", "url": "https://24ways.org/2016/a-favor-for-your-future-self/", "topic": "code"} {"rowid": 292, "title": "Watch Your Language!", "contents": "I\u2019m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can\u2019t be expressed in some languages, while other languages express these concepts with ease.\nIt also helped me understand the way we label languages. English: business. French: romance. Here\u2019s an example of how words, or the absence thereof, can affect the way we think:\nIn French we love everything. There\u2019s no straightforward way to say we like something, so we just end up loving everything. I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It\u2019s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn\u2019t grasped the difference. Needless to say, I\u2019ve scared away plenty of first dates!\nLearning another language made me realize the limitations of my native language and revealed concepts I didn\u2019t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas.\nWhen I lived in Montr\u00e9al, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient.\n\nI\u2019m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. In the past couple of years I have been lucky enough to write code in these languages at a massive scale. 
In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job.\nWhen I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn\u2019t consider accessibility. All this made editing views a laborious process.\nGrep. Replace all. Test. Ship it. Repeat.\nThis wasn\u2019t sustainable at Shopify\u2019s scale, so the newly-formed front end team was given two missions:\n\nMake the app responsive (AKA Let\u2019s Make This Thing Responsive ASAP)\nMake the view layer scalable and maintainable (AKA Let\u2019s Build a Pattern Library\u2026 in Ruby)\n\nLet\u2019s make this thing responsive ASAP\nThe year was 2015. The Shopify admin wasn\u2019t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries.\nIt seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn\u2019t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool.\nWriting the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple weeks to push this to production and make our app mostly responsive.\nBut with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn\u2019t performant at all. Components would jarringly jump around the page before settling in on first paint.\nIt was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn\u2019t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout.\nIn other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly.\nLet\u2019s build a pattern library\u2026 in Ruby\nIn order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping.\nWhen I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. Because of this baggage, the only way for us to build a pattern library that would get buyin from our developers was to use Rails view helpers. 
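To give a flavour of what that approach looks like in general terms, here is a sketch of a view helper and its use in a template. It is a generic illustration rather than Shopify's actual helper API, and all of the names are made up:

# app/helpers/ui_helper.rb
module UiHelper
  # One source of truth for the button markup.
  def ui_button(text, style: :default)
    content_tag :button, text, type: 'button', class: "ui-btn ui-btn--#{style}"
  end
end

<%# In any view: %>
<%= ui_button 'Save settings', style: :primary %>

Because every view calls the helper instead of repeating the markup, a change made in one place shows up everywhere: no more grep, replace all, test, ship it, repeat.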
And for many reasons using Ruby was the right choice for us. The time spent ramping developers up on the new UI Components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn\u2019t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan.\nWe put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front end patterns we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still do)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks.\nSince the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn\u2019t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn\u2019t going to cut it.\nI don\u2019t know the end of this story, because we haven\u2019t written it yet. We\u2019ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven\u2019t determined the winner yet. Only time (and data gathering) will tell.\nRuby is great but it doesn\u2019t speak the browser\u2019s language efficiently. It was not the right language for the job.\n\nLearning a new spoken language has had an impact on how I write code. It has taught me that you don\u2019t know what you don\u2019t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. But if you still feel like you\u2019re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language.", "year": "2016", "author": "Annie-Claude C\u00f4t\u00e9", "author_slug": "annieclaudecote", "published": "2016-12-10T00:00:00+00:00", "url": "https://24ways.org/2016/watch-your-language/", "topic": "code"} {"rowid": 291, "title": "Information Literacy Is a Design Problem", "contents": "Information literacy, wrote Dr. Carol Kulthau in her 1987 paper \u201cInformation Skills for an Information Society,\u201d is \u201cthe ability to read and to use information essential for everyday life\u201d\u2014that is, to effectively navigate a world built on \u201ccomplex masses of information generated by computers and mass media.\u201d\nNearly thirty years later, those \u201ccomplex masses of information\u201d have only grown wilder, thornier, and more constant. We call the internet a firehose, yet we\u2019re loathe to turn it off (or even down). The amount of information we consume daily is staggering\u2014and yet our ability to fully understand it all remains frustratingly insufficient. \nThis should hit a very particular chord for those of us working on the web. We may be developers, designers, or strategists\u2014we may not always be responsible for the words themselves\u2014but we all know that communication is much more than just words. 
From fonts to form fields, every design decision that we make changes the way information is perceived\u2014for better or for worse.\nWhat\u2019s more, the design decisions that we make feed into larger patterns. They don\u2019t just affect the perception of a single piece of information on a single site; they start to shape reader expectations of information anywhere. Users develop cumulative mental models of how websites should be: where to find a search bar, where to look at contact information, how to filter a product list. \nAnd yet: our models fail us. Fundamentally, we\u2019re not good at parsing information, and that\u2019s troubling. Our experience of an \u201cinformation society\u201d may have evolved, but the skills Dr. Kuhlthau spoke of are even more critical now: our lives depend on information literacy.\nPatterns from words\nLet\u2019s start at the beginning: with the words. Our choice of words can drastically alter a message, from its emotional resonance to its context to its literal meaning. Sometimes we can use word choice for good, to reinvigorate old, forgotten, or unfairly besmirched ideas.\nOne time at a wedding bbq we labeled the coleslaw BRASSICA MIXTA so people wouldn\u2019t skip it based on false hatred.\u2014 Eileen Webb (@webmeadow) November 27, 2016\n\nWe can also use clever word choice to build euphemisms, to name sensitive or intimate concepts without conjuring their full details. This trick gifts us with language like \u201cthe beast with two backs\u201d (thanks, Shakespeare!) and \u201csurfing the crimson wave\u201d (thanks, Cher Horowitz!).\nBut when we grapple with more serious concepts\u2014war, death, human rights\u2014this habit of declawing our language gets dangerous. Using more discrete wording serves to nullify the concepts themselves, euphemizing them out of sight and out of mind.\nThe result? Politicians never lie, they just \u201cmisspeak.\u201d Nobody\u2019s racist, but plenty of people are \u201ceconomically anxious.\u201d Nazis have rebranded as \u201calt-right.\u201d \nI\u2019m not an asshole, I\u2019m just alt-nice.\u2014 Andi Zeisler (@andizeisler) November 22, 2016\n\nThe problem with euphemisms like these is that they quickly infect everyday language. We use the words we hear around us. The more often we see \u201calt-right\u201d instead of \u201cNazi,\u201d the more likely we are to use that phrase ourselves\u2014normalizing the term as well as the terrible ideas behind it.\nPatterns from sentences\nThat process of normalization gets a boost from the media, our main vector of information about the world outside ourselves. Headlines control how we interpret the news that follows\u2014even if the story contradicts it in the end. We hear the framing more clearly than the content itself, coloring our interpretation of the news over time.\nEven worse, headlines are often written to encourage clicks, not to convey critical information. When headline-writing is driven by sensationalism, it\u2019s much, much easier to build a pattern of misinformation. Take this CBS News headline: \u201cDonald Trump: \u2018Millions\u2019 voted illegally for Hillary Clinton.\u201d The headline makes no indication that this an objectively false statement; instead, this word choice subtly suggestions that millions did, in fact, illegally vote for Hillary Clinton.\nHeadlines like this are what make lying a worthwhile political strategy. 
https://t.co/DRjGeYVKmW\u2014 Binyamin Appelbaum (@BCAppelbaum) November 27, 2016\n\nThis is a deeply dangerous choice of words when headlines are the primary way that news is conveyed\u2014especially on social media, where it\u2019s much faster to share than to actually read the article. In fact, according to a study from the Media Insight Project, \u201croughly six in 10 people acknowledge that they have done nothing more than read news headlines in the past week.\u201d \nIf a powerful person asserts X there are 2 responsible ways to cover:1. \u201cX is true\u201d2. \u201cPerson incorrectly thinks X\u201dNever \u201cPerson says X\u201d\u2014 Helen Rosner (@hels) November 27, 2016\n\nEven if we do, in fact, read the whole article, there\u2019s no guarantee that we\u2019re thinking critically about it. A study conducted by Stanford found that \u201c82 percent of students could not distinguish between a sponsored post and an actual news article on the same website. Nearly 70 percent of middle schoolers thought they had no reason to distrust a sponsored finance article written by the CEO of a bank, and many students evaluate the trustworthiness of tweets based on their level of detail and the size of attached photos.\u201d \nFriends: our information literacy is not very good. Luckily, we\u2014workers of the web\u2014are in a position to improve it.\nSentences into design\nConsider the presentation of those all-important headlines in social media cards, as on Facebook. The display is a combination of both the card\u2019s design and the article\u2019s source code, and looks something like this:\n\nA large image, a large headline; perhaps a brief description; and, at the bottom, in pale gray, a source and an author\u2019s name. \nThose choices convey certain values: specifically, they suggest that the headline and the picture are the entire point. The source is so deemphasized that it\u2019s easy to see how fake news gains a foothold: daily exposure to this kind of hierarchy has taught us that sources aren\u2019t important. \nAnd that\u2019s the message from the best-case scenario. Not every article shows every piece of data. Take this headline from the BBC: \u201cWisconsin receives request for vote recount.\u201d \n\nWith no image, no description, and no author, there\u2019s little opportunity to signal trust or provide nuance. There\u2019s also no date\u2014ever\u2014which presents potentially misleading complications, especially in the context of \u201cbreaking news.\u201d \nAnd lest you think dates don\u2019t matter in the light-speed era of social media, take the headline, \u201cMaryland sidesteps electoral college.\u201d Shared into my feed two days after the US presidential election, that\u2019s some serious news with major historical implications. But since there\u2019s no date on this card, there\u2019s no way for readers to know that the \u201cTuesday\u201d it refers to was in 2007. Again, a design choice has made misinformation far too contagious.\n\nMore recently, I posted my personal reaction to the death of Fidel Castro via a series of twenty tweets. Wanting to share my thoughts with friends and family who don\u2019t use Twitter, I then posted the first tweet to Facebook. The card it generated was less than ideal:\n\nThe information hierarchy created by this approach prioritizes the name of the Twitter user (not even the handle), along with the avatar. 
Not only does that create an awkward \u201cheadline\u201d (at least when you include a full stop in your name), but it also minimizes the content of the tweet itself\u2014which was the whole point. \nThe arbitrary elevation of some pieces of content over others\u2014like huge headlines juxtaposed with minimized sources\u2014teaches readers that these values are inherent to the content itself: that the headline is the news, that the source is irrelevant. We train readers to stop looking for the information we don\u2019t put in front of them. \nThese aren\u2019t life-or-death scenarios; they are just cases where design decisions noticeably dictate the perception of information. Not every design decision makes so obvious an impact, but the impact is there. Every single action adds to the pattern.\nDesign with intention\nWe can\u2019t necessarily teach people to read critically or vet their sources or stop believing conspiracy theories (or start believing facts). Our reach is limited to our roles: we make websites and products for companies and colleges and startups.\nBut we have more reach there than we might realize. Every decision we make influences how information is presented in the world. Every presentation adds to the pattern. No matter how innocuous our organization, how lowly our title, how small our user base\u2014every single one of us contributes, a little bit, to the way information is perceived.\nAre we changing it for the better?\nWhile it\u2019s always been crucial to act ethically in the building of the web, our cultural climate now requires dedicated, individual conscientiousness. It\u2019s not enough to think ourselves neutral, to dismiss our work as meaningless or apolitical. Everything is political. Every action, and every inaction, has an impact.\nAs Chappell Ellison put it much more eloquently than I can:\nEvery single action and decision a designer commits is a political act. The question is, are you a conscious actor?\u2014 Chappell Ellison\ud83e\udd14 (@ChappellTracker) November 28, 2016\n\nAs shapers of information, we have a responsibility: to create clarity, to further understanding, to advance truth. Every single one of us must choose to treat information\u2014and the society it builds\u2014with integrity.", "year": "2016", "author": "Lisa Maria Martin", "author_slug": "lisamariamartin", "published": "2016-12-14T00:00:00+00:00", "url": "https://24ways.org/2016/information-literacy-is-a-design-problem/", "topic": "content"} {"rowid": 290, "title": "Creating a Weekly Research Cadence", "contents": "Working on a product team, it\u2019s easy to get hyper-focused on building features and lose sight of your users and their daily challenges. User research can be time-consuming to set up, so it often becomes ad-hoc and irregular, only performed in response to a particular question or concern. But without frequent touch points and opportunities for discovery, your product will stagnate and become less and less relevant. Setting up an efficient cadence of weekly research conversations will re-focus your team on user problems and provide a steady stream of insights for product development.\nAs my team transitioned into a Lean process earlier this year, we needed a way to get more feedback from users in a short amount of time. Our users are internet marketers\u2014always busy and often difficult to reach. 
Scheduling research took days of emailing back and forth to find mutually agreeable times, and juggling one-off conversations made it difficult to connect with more than one or two people per week. The slow pace of research was allowing additional risk to creep into our product development.\nI wanted to find a way for our team to test ideas and validate assumptions sooner and more often\u2014but without increasing the administrative burden of scheduling. The solution: creating a regular cadence of research and testing that required a minimum of effort to coordinate. \nSetting up a weekly user research cadence accelerated our learning and built momentum behind strategic experiments. By dedicating time every week to talk to a few users, we made ongoing research a painless part of every weekly sprint. But increasing the frequency of our research had other benefits as well. With only five working days between sessions, a weekly cadence forced us to keep our work small and iterative. Committing to testing something every week meant showing work earlier and more often than we might have preferred\u2014pushing us out of your comfort zone into a process of more rapid experimentation.\nBest of all, frequent conversations with users helped us become more customer-focused. After just a few weeks in a consistent research cadence, I noticed user feedback weaving itself through our planning and strategy sessions. Comments like \u201cRemember what Jenna said last week, about not being able to customize her lists?\u201d would pop up as frequent reference points to guide our decisions. As discussions become less about subjective opinions and more about responding to user needs, we saw immediate improvement in the quality of our solutions.\nEstablishing an efficient recruitment process\nThe key to creating a regular cadence of ongoing user research is an efficient recruitment and scheduling process\u2014along with a commitment to prioritize the time needed for research conversations. This is an invaluable tool for product teams (whether or not they follow a Lean process), but could easily be adapted for content strategy teams, agency teams, a UX team of one, or any other project that would benefit from short, frequent conversations with users. \nThe process I use requires a few hours of setup time at the beginning, but pays off in better learning and better releases over the long run. Almost any team could use this as a starting point and adapt it to their own needs.\nPick a dedicated time each week for research\nIn order to make research a priority, we started by choosing a time each week when everyone on the product team was available. Between stand-ups, grooming sessions, and roadmap reviews, it wasn\u2019t easy to do! Nevertheless, it\u2019s important to include as many people as possible in conversations with your users. Getting a second-hand summary of research results doesn\u2019t have the same impact as hearing someone describe their frustrations and concerns first-hand. The more people in the room to hear those concerns, the more likely they are to become priorities for your team.\nI blocked off 2 hours for research conversations every Thursday afternoon. We make this time sacred, and never schedule other meetings or work across those hours. \nDivide your time into several research slots\nAfter my weekly cadence was set, I divided the time into four 20-minute time slots. 
Twenty minutes is long enough for us to ask several open-ended questions or get feedback on a prototype, without being a burden on our users\u2019 busy schedules. Depending on your work, you may need schedule longer sessions\u2014but beware the urge to create blocks that last an hour or more. A weekly research cadence is designed to facilitate rapid, ongoing feedback and testing; it should force you to talk to users often and to keep your work small and iterative. Projects that require longer, more in-depth testing will probably need a dedicated research project of their own.\nI used the scheduling software Calendly to create interview appointments on a calendar that I can share with users, and customized the confirmation and reminder emails with information about how to access our video conferencing software. (Most of our research is done remotely, but this could be set up with details for in-person meetings as well.) Automating these emails and reminders took a little bit of time to set up, but was worth it for how much faster it made the process overall.\n\n\nInvite users to sign up for a time that\u2019s convenient for them\nWith a calendar set up and follow-up emails automated, it becomes incredibly easy to schedule research conversations. Each week, I send a short email out to a small group of users inviting them to participate, explaining that this is a chance to provide feedback that will improve our product or occasionally promoting the opportunity to get a sneak peek at new features we\u2019re working on. The email includes a link to the Calendly appointments, allowing users who are interested to opt in to a time that fits their schedule.\nSetting up appointments the first go around involved a bit of educated guessing. How many invitations would it take to fill all four of my weekly slots? How far in advance did I need to recruit users? But after a few weeks of trial and error, I found that sending 12-16 invitations usually allows me to fill all four interview slots. Our users often have meetings pop up at short notice, so we get the best results when I send the recruiting email on Tuesday, two days before my research block.\nIt may take a bit of experimentation to fine tune your process, but it\u2019s worth the effort to get it right. (The worst thing that\u2019s happened since I began recruiting this way was receiving emails from users complaining that there were no open slots available!) I can now fill most of an afternoon with back-to-back user research sessions just by sending just one or two emails each week, increasing our research pace while leaving plenty time to focus on discovery and design.\nGetting the most out of your research sessions\nAs you get comfortable with the rhythm of talking to users each week, you\u2019ll find more and more ways to get value out of your conversations. At first, you may prefer to just show work in progress\u2014such as mockups or a simple prototype\u2014and ask open-ended questions to measure user reaction. When you begin new projects, you may want to use this time to research behavior on existing features\u2014either watching participants as they use part of your product or asking them to give an account of a recent experience in your app. You may even want to run more abstracted Lean experiments, if that\u2019s the best way to validate the assumptions your team is working from.\nWhatever you do, plan some time a day or two later to come back together and review what you\u2019ve learned each week. 
Synthesizing research outcomes as a group will help keep your team in alignment and allow each person to highlight what they took away from each conversation. \nOver time, you may find that the pace of weekly user research becomes more exhausting than energizing, especially if the responsibility for scheduling and planning falls on just one person. Don\u2019t allow yourself to get burned out; a healthy research cadence should also include time to rest and reflect if the pace becomes too rapid to sustain. Take breaks as needed, then pick up the pace again as soon as you\u2019re ready.", "year": "2016", "author": "Wren Lanier", "author_slug": "wrenlanier", "published": "2016-12-02T00:00:00+00:00", "url": "https://24ways.org/2016/creating-a-weekly-research-cadence/", "topic": "ux"} {"rowid": 289, "title": "Front-End Developers Are Information Architects Too", "contents": "The theme of this year\u2019s World IA Day was \u201cInformation Everywhere, Architects Everywhere\u201d. This article isn\u2019t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing amount of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People\u2019s ability to be successful online is unequivocally connected to the quality of the code that is written.\nPlaces made of information\nIn information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated:\n\nPeople live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds.\n25 Theses\n\nWe\u2019re creating structures which people rely on for significant parts of their lives, so it\u2019s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML.\nSemantics\nThe word \u201csemantic\u201d has its origin in Greek words meaning \u201csignificant\u201d, \u201csignify\u201d, and \u201csign\u201d. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building\u2019s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don\u2019t need an \u201centrance\u201d sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signify their usage. 
Sometimes a more literal and less awe-inspiring approach to communicating a building\u2019s purpose happens, but the effect is similar: the building is signifying something about its purpose.\nHTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating:\n\nElements, attributes, and attribute values in HTML are defined \u2026 to have certain meanings (semantics). For example, the ol
      element represents an ordered list, and the lang attribute represents the language of the content.\nHTML 5.1 Semantics, structure, and APIs of HTML documents\n\nHTML\u2019s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they\u2019re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It\u2019s also what a front-end developer does, whether they realise it or not.\nA brief introduction to information architecture\nWe\u2019re going to start by looking at what an information architect is. There are many definitions, and I\u2019m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is:\n\nthe individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information.\nOf Patterns And Structures\n\nTo me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people.\nJust as there are many definitions of what an information architect is, there are for information architecture itself. I\u2019m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as:\nThe structural design of shared information environments.\nThe synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.\nThe art and science of shaping information products and experiences to support usability, findability, and understanding.\nInformation Architecture For The World Wide Web, 4th Edition\nTo me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015\u2019s State Of The Browser conference, Edd Sowden talked about the accessibility of
tables. He discovered that by simply not using the semantically-correct th element to mark up headings, in some situations browsers will decide that a table
    is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by L\u00e9onie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page.\nOur definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we\u2019re going to use. Dan defines these terms as:\nOntology\nThe definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate.\nWhat we mean when we say what we say.\nTaxonomy\nThe arrangements of the parts. Developing systems and structures for what everything\u2019s called, where everything\u2019s sorted, and the relationships between labels and categories\nChoreography\nRules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.\n\nWe now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states:\n\n\u2026 data is not information. Information is data endowed with relevance and purpose.\nManaging For The Future\n\nIf we use the correct semantic element to mark up content then we\u2019re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we\u2019re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an
h2 element should be relevant to its parent, which should be the h1. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you\u2019ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:\n\nby creating a list of headings so users can quickly scan the page for information\nby using a keyboard command to cycle through one heading at a time\n\nIf we had a document for Christmas Day TV we might structure it something like this:
<h1>Christmas Day TV schedule</h1>
<h2>BBC1</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>BBC2</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>ITV</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>Channel 4</h2>
<h3>Morning</h3>
<h3>Evening</h3>
    \nIf I use VoiceOver to generate a list of headings, I get this:\n\nOnce I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the
h2s:\n\nIf we hadn\u2019t used headings, or if we\u2019d nested them incorrectly, our users would be frustrated.\nPutting this together\nLet\u2019s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a button element:
    \n \n
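The article's original markup for this button and panel isn't preserved in this copy, so here is a minimal, illustrative sketch of the idea; the id, the class names and the link targets are assumptions rather than the author's exact code:

<!-- The button points at the panel it controls via aria-controls -->
<button class="settings-toggle" aria-controls="settings-panel" aria-expanded="false">Settings</button>
<!-- The panel of links the button shows and hides -->
<div id="settings-panel" class="settings-panel" hidden>
 <ul>
  <li><a href="/account">Account</a></li>
  <li><a href="/notifications">Notifications</a></li>
 </ul>
</div>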
    \nThere\u2019s quite a bit going on here. We\u2019re using the:\n\naria-controls attribute to architect a connection between the

    ) and wait for events to bubble up from its children. The way to do this using the .on() method requires only one change from our code above:\n\n$('table').on('click','td',function() {\n $(this).toggleClass('active');\n});\n\nAll we\u2019ve done is moved the td selector to an argument inside the .on() method. Providing a selector to .on() switches it into delegation mode, and the event is only fired for descendants of the bound element (table) that match the selector (td). With that one simple change, we\u2019ve gone from having to bind one hundred event listeners to just one. You might think that the browser having to do one hundred times less work would be a good thing and you\u2019d be completely right. The difference between the two examples above is staggering.\n\n(Note that if your site is using a version of jQuery earlier than 1.7, you can accomplish the very same thing using the .delegate() method. The syntax of how you write the function differs slightly; if you\u2019ve never used it before, it\u2019s worth checking the API docs for that page to see how it works.)\n\nDOM manipulation\n\njQuery makes it very easy to manipulate the DOM. It\u2019s trivial to create new nodes, insert them, remove other ones, move things around, and so on. While the code to do this is simple to write, every time the DOM is manipulated, the browser has to repaint and reflow content which can be extremely costly. This is no more evident than in a long loop, whether it be a standard for() loop, while() loop, or jQuery $.each() loop.\n\nIn this case, let\u2019s say we\u2019ve just received an array full of image URLs from a database or Ajax call or wherever, and we want to put all of those images in an unordered list. Commonly, you\u2019ll see code like this to pull this off:\n\nvar arr = [reallyLongArrayOfImageURLs]; \n $.each(arr, function(count, item) {\n var newImg = '
<li><img src=\"' + item + '\" /></li>';\n $('#imgList').append(newImg);\n });\n\nThere are a couple of problems with this. For one (which you should have already noticed if you\u2019ve read the earlier part of this article), we\u2019re making the $(\"#imgList\") selection once for each iteration of our loop. The other problem here is that each time the loop iterates, it\u2019s adding a new li to the DOM. Each of those insertions is going to be costly, and if our array is quite large then this could lead to a massive slowdown or even the dreaded \u2018A script is causing this page to run slowly\u2019 warning.\n\nvar arr = [reallyLongArrayOfImageURLs],\n tmp = ''; \n$.each(arr, function(count, item) {\n tmp += '<li><img src=\"' + item + '\" /></li>';\n});\n$('#imgList').append(tmp);\n\nAll we\u2019ve done here is create a tmp variable that each li is added to as it\u2019s created. Once our loop has finished iterating, that tmp variable will contain all of our list items in memory, and can be appended to our ul
      all in one go. Browsers work much faster when working with objects in memory rather than on screen, so this is a much faster, more CPU-cycle-friendly method of building a list.\n\nWrapping up\n\nThese are far from being the only ways to make your jQuery code run better, but they are among the simplest ones to implement. Though each individual change may only make a few milliseconds of difference, it doesn\u2019t take long for those milliseconds to add up. Studies have shown that the human eye can discern delays of as few as 100ms, so simply making a few changes sprinkled throughout your code can very easily have a noticeable effect on how well your website or app performs. Do you have other jQuery optimization tips to share? Leave them in the comments and help make us all better.\n\nNow go forth and make awesome!", "year": "2011", "author": "Scott Kosman", "author_slug": "scottkosman", "published": "2011-12-13T00:00:00+00:00", "url": "https://24ways.org/2011/your-jquery-now-with-less-suck/", "topic": "code"} {"rowid": 275, "title": "Context First: Web Strategy in Four Handy Ws", "contents": "Many, many years ago, before web design became my proper job, I trained and worked as a journalist. I studied publishing in London and spent three fun years learning how to take a few little nuggets of information and turn them into a story. I learned a bunch of stuff that has all been a huge help to my design career. Flatplanning, layout, typographic theory. All of these disciplines have since translated really well to web design, but without doubt the most useful thing I learned was how to ask difficult questions.\n\nPretty much from day one of journalism school they hammer into you the importance of the Five Ws. Five disarmingly simple lines of enquiry that eloquently manage to provide the meat of any decent story. And with alliteration thrown in too. For a young journo, it\u2019s almost too good to be true.\n\nWho? What? Where? When? Why? It seems so obvious to almost be trite but, fundamentally, any story that manages to answer those questions for the reader is doing a pretty good job. You\u2019ll probably have noticed feeling underwhelmed by certain news pieces in the past \u2013 disappointed, like something was missing. Some irritating oversight that really lets the story down. No doubt it was one of the Ws \u2013 those innocuous little suckers are generally only noticeable by their absence, but they sure get missed when they\u2019re not there. \n\nQuestion everything\n\nI\u2019ve always been curious. An inveterate tinkerer with things and asker of dopey questions, often to the point of abject annoyance for anyone unfortunate enough to have ended up in my line of fire. So, naturally, the Five Ws started drifting into other areas of my life. I\u2019d scrutinize everything, trying to justify or explain my rationale using these Ws, but I\u2019d also find myself ripping apart the stuff that clearly couldn\u2019t justify itself against the same criteria.\n\nSo when I started working as a designer I applied the same logic and, sure enough, the Ws pretty much mapped to the exact same needs we had for gathering requirements at the start of a project. It seemed so obvious, such a simple way to establish the purpose of a product. What was it for? Why we were making it? And, of course, who were we making it for? It forced clients to stop and think, when really what they wanted was to get going and see something shiny. 
Sometimes that was a tricky conversation to have, but it\u2019s no coincidence that those who got it also understood the value of strategy and went on to have good solid products, while those that didn\u2019t often ended up with arrogantly insular and very shiny but ultimately unsatisfying and expendable products. Empty vessels make the most noise and all that\u2026\n\nContent first\n\nI was both surprised and pleased when the whole content first idea started to rear its head a couple of years back. Pleased, because without doubt it\u2019s absolutely the right way to work. And surprised, because personally it\u2019s always been the way I\u2019ve done it \u2013 I wasn\u2019t aware there was even an alternative way. Content in some form or another is the whole reason we were making the things we were making. I can\u2019t even imagine how you\u2019d start figuring out what a site needs to do, how it should be structured, or how it should look without a really good idea of what that content might be. It baffles me still that this was somehow news to a lot of people. What on earth were they doing? Design without purpose is just folly, surely?\n\nIt\u2019s great to see the idea gaining momentum but, having watched it unfold, it occurred to me recently that although it\u2019s fantastic to see a tangible shift in thinking \u2013 away from those bleak times, where making things up was somehow deemed an appropriate way to do things \u2013 there\u2019s now a new bad guy in town.\n\nWith any buzzword solution of the moment, there\u2019s always a catch, and it seems like some have taken the content first approach a little too literally. By which I mean, it\u2019s literally the first thing they do. The project starts, there\u2019s a very cursory nod towards gathering requirements, and off they go, cranking content. Writing copy, making video, commissioning illustrations.\n\nAll that\u2019s happened is that the \u2018making stuff up\u2019 part has shifted along the line, away from layout and UI, back to the content. \n\nStarting is too easy\n\nI can\u2019t remember where I first heard that phrase, but it\u2019s a great sentiment which applies to so much of what we do on the web. The medium is so accessible and to an extent disposable; throwing things together quickly carries far less burden than in any other industry. We\u2019re used to tweaking as we go, changing bits, iterating things into shape. The ubiquitous beta tag has become the ultimate caveat, and has made the unfinished and unpolished acceptable. Of course, that can work brilliantly in some circumstances. Occasionally, a product offers such a paradigm shift it\u2019s beyond the level of deep planning and prelaunch finessing we\u2019d ideally like. But, in the main, for most client sites we work on, there really is no excuse not to do things properly. To ask the tricky questions, to challenge preconceptions and really understand the Ws behind the products we\u2019re making before we even start. \n\nThe four Ws\n\nFor product definition, only four of the five Ws really apply, although there\u2019s a lot of discussion around the idea of when being an influencing factor. For example, the context of a user\u2019s engagement with your product is something you can make a call on depending on the specifics of the project.\n\nSo, here\u2019s my take on the four essential Ws. I\u2019ll point out here that, of course, these are not intended to be autocratic dictums. 
Your needs may differ, your clients\u2019 needs may differ, but these four starting points will get you pretty close to where you need to be.\n\nWho \n\nIt\u2019s surprising just how many projects start without a real understanding of the intended audience. Many clients think they have an idea, but without really knowing \u2013 it\u2019s presumptive at best, and we all know what presumption is the mother of, right? Of course, we can\u2019t know our audiences in the same way a small shop owner might know their customers. But we can at least strive to find out what type of people are likely to be using the product. I\u2019m not talking about deep user research. That should come later.\n\nThese are the absolute basics. What\u2019s the context for their visit? How informed are they? What\u2019s their level of comprehension? Are they able to self-identify and relate to categories you have created? I could go on, and it changes on a per-project basis. You\u2019ll only find this out by speaking to them, if not in person, then indirectly through surveys, questionnaires or polls. The mechanism is less important than actually reaching out and engaging with them, because without that understanding it\u2019s impossible to start to design with any empathy.\n\nWhat\n\nOnce you become deeply involved directly with a product or service, it\u2019s notoriously difficult to see things as an outsider would. You learn the thing inside and out, you develop shortcuts and internal phraseology. Colloquialisms creep in. You become too close. So it\u2019s no surprise when clients sometimes struggle to explain what it is their product actually does in a way that others can understand.\n\nOften products are complex but, really, the core reasons behind someone wanting to use that product are very simple. There\u2019s a value proposition for the customer and, if they choose to engage with it, there\u2019s a value exchange. If that proposition or exchange isn\u2019t transparent, then people become confused and will likely go elsewhere. Make sure both your client and you really understand what that proposition is and, in turn, what the expected exchange should be. In a nutshell: what is the intended outcome of that engagement? Often the best way to do this is strip everything back to nothing. Verbosity is rife on the web. Just because it\u2019s easy to create content, that shouldn\u2019t be a reason to do so. Figure out what the value proposition is and then reintroduce content elements that genuinely help explain or present that to a level that is appropriate for the audience. \n\nWhy \n\nIn advertising, they talk about the truths behind a product or service. Truths can be both tangible or abstract, but the most important part is the resonance those truths hit with a customer. In a digital product or service those truths are often exposed as benefits. Why is this what I need? Why will it work for me? Why should I trust you? The why is one of the more fluffy Ws, yet it\u2019s such an important one to nail. Clients can get prickly when you ask them to justify the why behind their product, but it\u2019s a fantastic way to make sure the value proposition is clear, realistic and meets with the expectations of both client and customer.\n\nIt\u2019s our job as designers to question things: we\u2019re not just a pair of hands for clients. Just recently I spoke to a potential client about a site for his business. I asked him why people would use his product and also why his product seemed so fractured in its direction. 
He couldnt answer that question so, instead of ploughing on regardless, he went back to his directors and is now re-evaluating that business. It was awkward but he thanked me and hopefully he\u2019ll have a better product as a result.\n\nWhere\n\nIn this instance, where is not so much a geographical thing, although in some cases that level of context may indeed become a influencing factor\u2026 The where we\u2019re talking about here is the position of the product in relation to others around it. By looking at competitors or similar services around the one you are designing, you can start to get a sense for many of the things that are otherwise hard to pin down or have yet to be defined. For example, in a collection of sites all selling cars, where does yours fit most closely? Where are the overlaps? How are they communicating to their customers? How is the product range presented or categorized?\n\nIt\u2019s good to look around and see how others are doing it. Not in a quest for homogeneity but more to reference or to avoid certain patterns that may or may not make sense for your own particular product. Clients often strive to be different for the sake of it. They feel they need to provide distinction by going against the flow a bit. We know different. We know users love convention. They embrace familiar mental models. They\u2019re comfortable with things that they\u2019ve experienced elsewhere. By showing your client that position is a vital part of their strategy, you can help shape their product into something great. \n\nTo conclude\n\nSo there we have it \u2013 the four Ws. Each part tells a different and vital part of the story you need to be able to make a really good product. It might sound like a lot of work, particularly when the client is breathing down your neck expecting to see things, but without those pieces in place, the story you\u2019re building your product on, and the content that you\u2019re creating to form that product can only ever fit into one genre. Fiction.", "year": "2011", "author": "Alex Morris", "author_slug": "alexmorris", "published": "2011-12-10T00:00:00+00:00", "url": "https://24ways.org/2011/context-first/", "topic": "content"} {"rowid": 274, "title": "Adaptive Images for Responsive Designs", "contents": "So you\u2019ve been building some responsive designs and you\u2019ve been working through your checklist of things to do:\n\n\n\tYou started with the content and designed around it, with mobile in mind first.\n\tYou\u2019ve gone liquid and there\u2019s nary a px value in sight; % is your weapon of choice now.\n\tYou\u2019ve baked in a few media queries to adapt your layout and tweak your design at different window widths.\n\tYou\u2019ve made your images scale to the container width using the fluid Image technique.\n\tYou\u2019ve even done the same for your videos using a nifty bit of JavaScript.\n\n\nYou\u2019ve done a good job so pat yourself on the back. But there\u2019s still a problem and it\u2019s as tricky as it is important: image resolutions.\n\nHTML has an problem\n\nCSS is great at adapting a website design to different window sizes \u2013 it allows you not only to tweak layout but also to send rescaled versions of the design\u2019s images. And you want to do that because, after all, a smartphone does not need a 1,900-pixel background image1.\n\nHTML is less great. In the same way that you don\u2019t want CSS background images to be larger than required, you don\u2019t want that happening with s either. 
A smartphone only needs a small image but desktop users need a large one. Unfortunately s can\u2019t adapt like CSS, so what do we do?\n\nWell, you could just use a high resolution image and the fluid image technique would scale it down to fit the viewport; but that\u2019s sending an image five or six times the file size that\u2019s really needed, which makes it slow to download and unpleasant to use. Smartphones are pretty impressive devices \u2013 my ancient iPhone 3G is more powerful in every way than my first proper computer \u2013 but they\u2019re still terribly slow in comparison to today\u2019s desktop machines. Sending a massive image means it has to be manipulated in memory and redrawn as you scroll. You\u2019ll find phones rapidly run out of RAM and slow to a crawl.\n\nWell, OK. You went mobile first with everything else so why not put in mobile resolution s too? Because even though mobile devices are rapidly gaining share in your analytics stats, they\u2019re still not likely to be the major share of your user base. I don\u2019t think desktop users would be happy with pokey little mobile resolution images, do you? What we need are adaptive images.\n\nAdaptive image techniques\n\nThere are a number of possible solutions, each with pros and cons, and it\u2019s not as simple to find a graceful solution as you might expect.\n\nYour first thought might be to use JavaScript to trawl through the markup and rewrite the source attribute. That\u2019ll get you the right end result, but it\u2019ll have done it in a way you absolutely don\u2019t want. That\u2019s because of the way browsers load resources. It starts to load the HTML and builds the page on-the-fly; as soon as it finds an element it immediately asks the server for that image. After the HTML has finished loading, the JavaScript will run, change the src attribute, and then the browser will request that new image too. Not instead of, but as well as. Not good: that\u2019s added more bloat instead of cutting it.\n\nPlain JavaScript is out then, which is a problem, because what other tools do we have to work with as web designers? Let\u2019s ignore that for now and I\u2019ll outline another issue with the concept of serving different resolution images for different window widths: a basic file management problem. To request a different image, that image has to exist on the server. How\u2019s it going to get there? That\u2019s not a trivial problem, especially if you have non-technical users that update content through a CMS. Let\u2019s say you solve that \u2013 do you plan on a simple binary switch: big image|little image? Is that really efficient or future-proof? What happens if you have an archive of existing content that needs to behave this way? Can you apply such a solution to existing content or markup?\n\nThere\u2019s a detailed round-up of potential techniques for solving the adaptive images problem over at the Cloud Four blog if you fancy a dig around exploring all the options currently available. But I\u2019m here to show you what I think is the most flexible and easy to implement solution, so here we are.\n\nAdaptive Images\n\nAdaptive Images aims to mitigate most of the issues surrounding the problems of bringing the venerable tag into the 21st century. And it works by leaving that tag completely alone \u2013 just add that desktop resolution image into the markup as you\u2019ve been doing for years now. We\u2019ll fix it using secret magic techniques and bottled pixie dreams. 
Well, fine: with one .htaccess file, one small PHP file and one line of JavaScript. But you\u2019re killing the mystique with that kind of talk.\n\nSo, what does this solution do?\n\n\n\tIt allows s to adapt to the same break points you use in your media queries, giving granular control in the same way you get with your CSS.\n\tIt installs on your server in five minutes or less and after that is automatic and you don\u2019t need to do anything.\n\tIt generates its own rescaled images on the server and doesn\u2019t require markup changes, so you can apply it to existing web content.\n\tIf you wish, it will make all of your images go mobile-first (just in case that\u2019s what you want if JavaScript and cookies aren\u2019t available).\n\n\nSound good? I hope so. Here\u2019s what you do.\n\nSetting up and rolling out\n\nI\u2019ll assume you have some basic server knowledge along with that wealth of front-end wisdom exploding out of your head: that you know not to overwrite any existing .htaccess file for example, and how to set file permissions on your server. Feeling up to it? Excellent.\n\n\n\tDownload the latest version of Adaptive Images either from the website or from the GitHub repository.\n\tUpload the included .htaccess and adaptive-images.php files into the root folder of your website.\n\tCreate a directory called ai-cache and make sure the server can write to it (CHMOD 755 should do it).\n\tAdd the following line of JavaScript into the of your site:\n\n\n\n\nThat\u2019s it, unless you want to tweak the default settings. You likely do, but essentially you\u2019re already up and running.\n\nHow it works\n\nAdaptive Images does a number of things depending on the scenario the script has to handle, but here\u2019s a basic overview of what it does when you load a page running it:\n\n\n\tA session cookie is written with the value of the visitor\u2019s screen size in pixels.\n\tThe HTML encounters an tag and sends a request to the server for that image. It also sends the cookie, because that\u2019s how browsers work.\n\tApache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL.\n\tThere are! The .htaccess says \u201cHey, server! Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.\u201d\n\tThe PHP file then does some intelligent thinking which can cover a number of scenarios, but I\u2019ll illustrate one path that can happen:\n\n\n\t\n\t\tThe PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px.\n\t\tThe PHP has a look at the available media query sizes that were configured and decides which one matches the user\u2019s device.\n\t\tIt then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there.\n\t\tWe\u2019ll pretend it doesn\u2019t \u2013 the PHP then goes to the actual requested URI and finds that the original file does exist.\n\t\tIt has a look to see how wide that image is. If it\u2019s already smaller than the user\u2019s screen width it sends it along and stops there. But, let\u2019s pretend the image is 1,000px wide.\n\t\tThe PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it.\n\t\n\nIt also does a few other things when needs arise, for example:\n\n\n\tIt sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. 
This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it.\n\tIt handles cases where there isn\u2019t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size.\n\tIt compares timestamps between the source image and the generated cache image \u2013 to ensure that if the source image gets updated, the old cached file won\u2019t be sent.\n\n\nCustomizing\n\nThere are a few options you can customize if you don\u2019t like the default values. By looking in the PHP\u2019s configuration section at the top of the file, you can:\n\n\n\tSet the resolution breakpoints to match your media query break points.\n\tChange the name and location of the ai-cache folder.\n\tChange the quality level any generated JPG images are saved at.\n\tHave it perform a subtle sharpen on rescaled images to help keep detail.\n\tToggle whether you want it to compare the files in the cache folder with the source ones or not.\n\tSet how long the browser should cache the images for.\n\tSwitch between a mobile-first or desktop-first approach when a cookie isn\u2019t found.\n\n\nMore importantly, you probably want to omit a few folders from the AI behaviour. You don\u2019t need or want it resizing the images you\u2019re using in your CSS, for example. That\u2019s fine \u2013 just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. Or, if you\u2019re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it\u2019ll only apply AI behaviour to a given list of folders.\n\nCaveats\n\nAs I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I\u2019ve seen so far.\n\nThis is a PHP solution\n\nI wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don\u2019t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence2 and I would welcome anyone to contribute a port of the code3. \n\nContent delivery networks\n\nAdaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won\u2019t allow that to happen. Adaptive Images will not work if you\u2019re using a CDN to deliver your website.\n\nA minor but interesting cookie issue.\n\nAs Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested \u2013 even though the JavaScript that sets the cookie is loaded by the browser before it finds any tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem:\n\n\n\tThe $mobile_first toggle allows you to choose what to send to a browser if a cookie isn\u2019t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest.\n\tEven if set to TRUE, Adaptive Images checks the User Agent String. 
If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE.\n\n\nThis means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn\u2019t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest.\n\nThe best way to get a cookie written is to use JavaScript as I\u2019ve explained above, because it\u2019s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP \u2018image\u2019 to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won\u2019t be set until after images have been requested.\n\nThe future\n\nFor today, this is a pretty good solution. It works, and as it doesn\u2019t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you\u2019re good to go \u2013 you\u2019d never know AI had been there.\n\nHowever, this isn\u2019t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future.\n\nFirst, we could do with browsers sending far more information about the user\u2019s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn\u2019t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for.\n\nSecondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does.\n\nI hope you\u2019ve found this interesting and will find Adaptive Images useful.\n\nFootnotes\n\n1 While I\u2019m talking about preventing smartphones from downloading resources they don\u2019t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files.\n\n2 Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository. \n\n3 There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images.", "year": "2011", "author": "Matt Wilcox", "author_slug": "mattwilcox", "published": "2011-12-04T00:00:00+00:00", "url": "https://24ways.org/2011/adaptive-images-for-responsive-designs/", "topic": "ux"} {"rowid": 273, "title": "There\u2019s No Formula for Great Designs", "contents": "Before he combined them with fluid images and CSS3 media queries to coin responsive design, Ethan Marcotte described fluid grids \u2014 one of the most enjoyable parts of responsive design. Enjoyable that is, if you like working with math(s). 
But fluid grids aren\u2019t perfect and, unless we\u2019re careful when applying them, they can sometimes result in a design that feels disconnected.\n\nRecapping fluid grids\n\nIf you haven\u2019t read Ethan\u2019s Fluid Grids, now would be a good time to do that. It centres around a simple formula for converting pixel widths to percentages:\n\n(target \u00f7 context) \u00d7 100 = result\n\nHow does that work in practice? Well, take that Fireworks or Photoshop comp you\u2019re working on (I call them static design visuals, or just visuals.) Of course, everything on that visual \u2014 column divisions, inline images, navigation elements, everything \u2014 is measured in pixels. Now:\n\n\n\tPick something in the visual and measure its width. That\u2019s our target.\n\tTake that target measurement and divide it by the width of its parent (context).\n\tMultiply what you\u2019ve got by 100 (shift two decimal places).\n\tWhat you\u2019re left with is a percentage width to drop into your style sheets.\n\n\nFor example, divide this 300px wide sidebar division by its 948px parent and then multiply by 100: your original 300px is neatly converted to 31.646%.\n\n.content-sub {\nwidth : 31.646%; /* 300px \u00f7 948px = .31646 */ }\n\nThat formula makes it surprisingly simple for even die-hard fixed width aficionados to convert their visuals to percentage-based, fluid layouts.\n\nIt\u2019s a handy formula for those who still design using static visuals, and downright essential for those situations where one person in an organization designs in Fireworks or Photoshop and another develops with CSS. Why?\n\nWell, although I think that designing in a browser makes the best sense \u2014 particularly when designing for multiple devices \u2014 I\u2019ll wager most designers still make visuals in Fireworks or Photoshop and use them for demonstrations and get feedback and sign-off. That\u2019s OK. If you haven\u2019t made the transition to content-out designing in a browser yet, the fluid grids formula helps you carry on pushing pixels a while longer.\n\nYou can carry on moving pixel width measurements from your visuals to your style sheets, too, in the same way you always have. You can be precise to the pixel and even apply a grid image as a CSS background to help you keep everything lined up perfectly.\n\nOnce you\u2019re done, and the fixed width layout in the browser matches your visual, loop back through your style sheets and convert those pixels to percentages using the fluid grids formula. With very little extra work, you\u2019ll have a fluid implementation of your fixed width layout.\n\nThe fluid grids formula is simple and incredibly effective, but not long after I started working responsively I realized that the formula shouldn\u2019t (always) be a one-fix, set-and-forget calculation. I noticed that unless we compensate for problems it sometimes creates, the result can be a disconnected design.\n\nStaying connected\n\nGood design relies on connectedness, a feeling of natural balance between elements and the grid they\u2019re placed on. Give an element greater prominence or position in a visual hierarchy and you can fundamentally alter the balance and sometimes the meaning of a design.\n\nDifferent from a browser\u2019s page zooming feature \u2014 where images, text and overall layout change size by the same ratio \u2014 fluid grids flex a layout in response to a window or device width. Columns expand and contract, and within them fluid media (images and videos) can also change size. 
This can be one of the most impressive demonstrations of responsive design.\n\nBut not every element within a fluid grid can change size along with the window or device width. For example, type size and leading won\u2019t change along with a column\u2019s width.\n\nWhen columns and elements within them change width, all too easily a visual hierarchy can be broken and along with it the relationship between element sizes and the outer window or viewport. This can happen quickly if you make just one set of fluid grid calculations and use those percentages across every screen width, from smartphones through tablets and up to large desktops.\n\nThe answer? Make several sets of fluid grids calculations, each one at a significant window or device width breakpoint. Then apply those new percentages, when needed, to help keep elements in proportion and maintain balance and connectedness. Here\u2019s how I work.\n\nAvoiding disconnection\n\nI\u2019ve never been entirely happy with grid frameworks such as the 960 Grid System, so I start almost every project by creating a custom grid to inform my layout decisions. Here\u2019s a plain version of a grid from a recent project that I\u2019ll use as an illustration.\n\nThis project\u2019s grid comprises 84px columns and 24px gutters. This creates an odd number of columns at common tablet and desktop widths, and allows for 300px fixed width assets \u2014 useful when I need to fit advertising into a desktop layout\u2019s sidebar.\n\n Showing common advertising sizes (Larger image)\n\nFor this project I chose three 320 and Up breakpoints above 320px and, after placing as many columns as would fit those breakpoint widths, I derived three content widths:\n\n\n\t\t\n\t\t\tBreakpoint \n\t\t\tColumns \n\t\t\tContent width \n\t\t\n\t\t\n\t\t\t768px \n\t\t\t 7 \n\t\t\t 732px \n\t\t\n\t\t\n\t\t\t992px \n\t\t\t 9 \n\t\t\t 948px \n\t\t\n\t\t\n\t\t\t1,382px \n\t\t\t 13 \n\t\t\t 1,380px \n\t\t\n\n\nHere\u2019s my grid again, this time with pixel measurements and breakpoints overlaid.\n\n Showing pixel measurements and breakpoints (Larger image)\n\nNow cast your mind back to the fluid grids calculation I made earlier. I divided a 300px element by 948px and arrived at 31.646%. For some elements it\u2019s possible to use that percentage across all screen widths, but others will feel too small in relation to a narrower 768px and too large inside 1,380px.\n\nTo help maintain connectedness, I make a set of fluid grids calculations based on each of the content widths I established earlier. Now I can shift an element\u2019s percentage width up or down when I switch to a new breakpoint and content width. For example:\n\n\n\t300px is 40.984% of 732px\n\t300px is 31.646% of 948px\n\t300px is 21.739% of 1,380px\n\n\nI\u2019ll add all those fluid grid percentages to my grid image and save it for quick reference.\n\n Showing percentages at all breakpoints (Larger image)\n\nThen I can apply those different percentage widths to elements at each breakpoint using CSS3 media queries. 
For example, that sidebar division again:\n\n/* 732px, 7-column width */\n\n@media only screen and (min-width: 768px) {\n\n .content-sub {\n width : 40.983%; /* 300px \u00f7 732px = .40983 */ }\n\n}\n\n/* 948px, 9-column width */\n@media only screen and (min-width: 992px) {\n\n .content-sub {\n width : 31.645%; /* 300px \u00f7 948px = .31645 */ }\n\n}\n\n/* 1380px, 13-column width */\n@media only screen and (min-width: 1382px) {\n\n .content-sub {\n width : 21.739%; /* 300px \u00f7 1380px = .21739 */ }\n\n}\n\nThe number of changes you make to a layout at different breakpoints will, of course, depend on the specifics of the design you\u2019re working on. Yes, this is additional work, but the result will be a layout that feels better balanced and within which elements remain in harmony with each other while they respond to new screen or device widths.\n\nPutting the design in responsive web design\n\nUntil now, many of the conversations around responsive web design have been about aspects of technical implementation, rather than design. I believe we\u2019re only beginning to understand what\u2019s involved in designing responsively. In future, we\u2019ll likely be making design decisions not just about proportions but also about responsive typography. We\u2019ll also need to learn how to adapt our designs to device characteristics such as touch targets and more.\n\nSometimes we\u2019ll make decisions to improve function, other times because they make a design \u2018feel\u2019 right. You\u2019ll know when you\u2019ve made a right decision. You\u2019ll feel it.\n\nAfter all, there really is no formula for making great designs.", "year": "2011", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2011-12-23T00:00:00+00:00", "url": "https://24ways.org/2011/theres-no-formula-for-great-designs/", "topic": "ux"} {"rowid": 272, "title": "Crafting the Front-end", "contents": "Much has been spoken and written recently about the virtues of craftsmanship in the context of web design and development. It seems that we as fabricators of the web are finally tiring of seeking out parallels between ourselves and architects, and are turning instead to the fabled specialist artisans.\n\nIdentifying oneself as a craftsman or craftswoman (let\u2019s just say craftsperson from here onward) will likely be a trend of early 2012. In this pre-emptive strike, I\u2019d like to expound on this movement as I feel it pertains to front-end development, and encourage care and understanding of the true qualities of craftsmanship (craftspersonship).\n\nThe core values\n\nI\u2019ll begin by defining craftspersonship. What distinguishes a craftsperson from a technician? Dictionaries tend to define a craftsperson as one who possesses great skill in a chosen field. The badge of a craftsperson for me, though, is a very special label that should be revered and used sparingly, only where it is truly deserved. A genuine craftsperson encompasses a few other key traits, far beyond raw skill, each of which must be learned and mastered.\n\nA craftsperson has: \n\n\n\tAn appreciation of good work, in both the work of others and their own. 
And not just good as in \u2018hey, that\u2019s pretty neat\u2019, I mean a goodness like a shining purity \u2013 the kind of good that feels right when you see it.\n\tA belief in quality at every level: every facet of the craftsperson\u2019s product is as crucial as any other, without exception, even those normally hidden from view.\n\tVision: an ability to visualize their path ahead, pre-empting the obstacles that may be encountered to plan a route around them.\n\tA preference for simplicity: an almost Bauhausesque devotion to undecorated functionality, with no unjustifiable parts included.\n\tSincerity: producing work that speaks directly to its purpose with flawless clarity.\n\n\nOnly when you become a custodian of such values in your work can you consider calling yourself a craftsperson. Now let\u2019s take a look at some steps we front-end developers can take on our journey of enlightenment toward craftspersonhood.\n\n Speaking of the craftsman\u2019s journey, be sure to watch out for the video of The Standardistas\u2019 stellar talk at the Build 2011 conference titled The Journey, which should be online sometime soon.\n\nBuilding your own toolbox\n\nMy grandfather was a carpenter and trained as a young apprentice under a master. After observing and practising the many foundation theories, principles and techniques of carpentry, he was tasked with creating his own set of woodworking tools, which he would use and maintain throughout his career. By going through the process of having to create his own tools, he would be connected at the most direct level with every piece of wood he touched, his tools being his own creations and extensions of his own skilled hands. The depth of his knowledge of these tools must have surpassed the intricate as he fathered, used, cleaned and repaired them, day in and day out over many years.\n\nAnd so it should be, ideally, with all crafts. We must understand our tools right down to the most fundamental level. I firmly believe that a level of true craftsmanship cannot be reached while there exists a layer that remains not wholly understood between a creator and his canvas. Of course, our tools as front-end developers are somewhat more complex than those of other crafts \u2013 it may seem reasonable to require that a carpenter create his or her own set of chisels, but somewhat less so to ask a front-end developer to code their own CSS preprocessor, or design their own computer.\n\nHowever, it is still vitally important that you understand how your tools work. This is particularly critical when it comes to things like preprocessors, libraries and frameworks which aim to save you time by automating common processes and functions. For the most part, anything that saves you time is a Good Thing\u2122 but it cannot be stressed enough that using tools like these in earnest should be avoided until you understand exactly what they are doing for you (and, to an extent, how they are doing it). \n\nIn particular, you must understand any drawbacks to using your tools, and any shortcuts they may be taking on your behalf. I\u2019m not suggesting that you steer clear of paid work until you\u2019ve studied each of jQuery\u2019s 9,266 lines of JavaScript source code but, all levity aside, it will further you on your journey to look at interesting or relevant bits of jQuery, and any other libraries you might want to use. Such libraries often directly link to corresponding sections of their source code on sites like GitHub from their official documentation. 
Better yet, they\u2019re almost always written in high level languages (easy to read), so there\u2019s no excuse not to don your pith helmet and go on something of an exploration. Any kind of tangential learning like this will drive you further toward becoming a true craftsperson, so keep an open mind and always be ready to step out of your comfort zone.\n\nDowntime and tool honing\n\nWith any craft, it is essential to keep your tools in good condition, and a good idea to stay up-to-date with the latest equipment. This is especially true on the web, which, as we like to tell anyone who is still awake more than a minute after asking what it is that we do, advances at a phenomenal pace. A tool or technique that could be considered best practice this week might be the subject of haughty derision in a comment thread within six months.\n\nI have little doubt that you already spend a chunk of time each day keeping up with the latest material from our industry\u2019s finest Interblogs and Twittertubes, but do you honestly put aside time to collect bookmarks and code snippets from things you read into a slowly evolving toolbox? At @media in 2009, Simon Collison delivered a candid talk on his \u2018Ultimate Package\u2019. Those of us who didn\u2019t flee the room anticipating a newfound and unwelcome intimacy with the contents of his trousers were shown how he maintained his own toolkit \u2013 a collection of files and folders all set up and ready to go for a new project. By maintaining a toolkit in this way, he has consistency across projects and a dependable base upon which to learn and improve.\n\nThe assembly and maintenance of such a personalized and familiar toolkit is probably as close as we will get to emulating the tool making stage of more traditional craft trades. Keep a master copy of your toolkit somewhere safe, making copies of it for new projects. When you learn of a way in which part of it can be improved, make changes to the master copy.\n\nSimplicity through modularity\n\nI believe that the user interfaces of all web applications should be thought of as being made up primarily of modular components. Modules in this context are patterns in design that appear repeatedly throughout the app. These can be small collections of elements, like a user profile summary box (profile picture, username, meta data), as well as atomic elements such as headings and list items.\n\nWell-crafted front-end architectures have the ability to support this kind of repeating pattern as modules, with as close to no repetition of CSS (or JavaScript) as possible, and as close to no variations in HTML between instances as possible.\n\nOne of the most fundamental and well known tenets of software engineering is the DRY rule \u2013 don\u2019t repeat yourself. It requires that \u201cevery piece of knowledge must have a single, unambiguous, authoritative representation within a system.\u201d \n\nAs craftspeople, we must hold this rule dear and apply it to the modules we have identified in our site designs. The moment you commit a second style definition for a module, the quality of your output (the front-end code) takes a huge hit. There should only ever be one base style definition for each distinct module or component. Keep these in a separate, sacred place in your CSS. 
I use a _modules.scss Sass include file, imported near the top of my main CSS files.\n\nBe sure, of course, to avoid making changes to this file lightly, as the smallest adjustment can affect multiple pages (hint: keep a structure list of which modules are used on which pages). Avoid the inevitable temptation to duplicate code late in the project. Sticking to this rule becomes more important the more complex the codebase becomes.\n\nIf you can stick to this rule, using sensible class names and consistent HTML, you can reach a joyous, self-fulfilling plateau stage in each project where you are assembling each interface from your own set of carefully crafted building blocks.\n\nOld school markup\n\nLet\u2019s take a step back. Before we fret about creating a divinely pure modular CSS framework, we need to know the site\u2019s design and what it is made of. The best way to gain this knowledge is to go old school. Print out every comp, mockup, wireframe, sketch or whatever you have. If there are sections of pages that are hidden until some user action takes place, or if the page has multiple states, be sure that you have everything that could become visible to the user on paper.\n\nOnce you have your wedge of paper designs, lay out all the pages on the floor, or stick them to the wall if you can, arranging them logically according to the site hierarchy, by user journey, or whatever guidelines make most sense to you. Once you have the site laid out before you, study it for a while, familiarizing yourself with every part of every interface. This will eliminate nasty surprises late in the project when you realize you\u2019ve duplicated something, or left an interface on the drawing board altogether.\n\nNow that you know the site like it\u2019s your best friend, get out your pens or pencils of choice and attack it. Mark it up like there\u2019s no tomorrow. Pretend you\u2019re a spy trying to identify communications from an enemy network hiding their messages in newspapers. Look for patterns and similarities, drawing circles around them. These are your modules. Start also highlighting the differences between each instance of these modules, working out which is the most basic or common type that will become the base definition from which all other representations are extended.\n\nThis simple but empowering exercise will equip you for your task of actually crafting, instead of just building, the front-end. Without the knowledge gained from this kind of research phase, you will be blundering forward, improvising as best you can, but ultimately making quality-compromising mistakes that could have been avoided.\n\nFor more on this theme, read Anna Debenham\u2019s Front-end Style Guides which recommends a similar process, and the sublime idea of extending this into a guide to refer to during development and beyond.\n\nDesign homogeneity\n\nMoving forward again, you now have your modules defined and things are looking good. I mentioned that many instances of these modules will carry minor differences. These differences must be given significant thinking time, and discussion time with your designer(s).\n\nIt should be common knowledge by now that successful software projects are not the product of distinct design and build phases with little or no bidirectional feedback. 
The crucial nature of the designer-developer relationship has been covered in depth this year by Paul Robert Lloyd, and a joint effort from both teams throughout the project lifecycle is pivotal to your ability to craft and ship successful products.\n\nThis relationship comes into play when you\u2019re well into the development of the site, and you start noticing these differences between instances of modules (they\u2019ll start to stand out very clearly to you and your carefully regimented modular CSS system). Before you start overriding your base styles, question the differences with the designer to work out why they exist. Perhaps they are required and are important to their context, but perhaps they were oversights from earlier design revisions, or simple mistakes.\n\nThe craftsperson\u2019s gland\n\nAs you grow towards the levels of expertise and experience where you can proudly and honestly consider yourself a craftsperson, you will find that you steadily develop what initially feels like a kind of sixth sense. I think of it more as a new hormonal gland, secreting into your bloodstream a powerful messenger chemical that can either reward or punish your brain. This gland is connected directly to your core understanding of what good quality work looks and feels like, an understanding that itself improves with experience. \n\nThis gland will make itself known to you in two ways. First, when you solve a problem in a beautifully elegant way with clean and unobtrusive code that looks good and just feels right, your craftsperson\u2019s gland will ooze something delicious that makes your brain and soul glow from the inside out. You will beam triumphantly at the succinct lines of code on your computer display before bounding outside with a spring in your step to swim up glittering rainbows and kiss soft fluffy puppies.\n\nThe second way that you may become aware of your craftsperson\u2019s gland, though, is somewhat less pleasurable. In an alternate reality, your parallel self is faced with the same problem, but decides to take a shortcut and get around it by some dubious means \u2013 the kind of technical method that the words hack, kludge and bodge are reserved for. As soon as you have done this, or even as you are doing it, your craftsperson\u2019s gland will damn well let you know that you took the wrong fork in the road. As your craftsperson\u2019s gland begins to secrete a toxic pus, you will at first become entranced into a vacant stare at the monstrous mess you are considering unleashing upon your site\u2019s visitors, before writhing in the horrible agony of an itch that can never be scratched, and a feeling of being coated with the devil\u2019s own deep and penetrating filth that no shower will ever cleanse.\n\nPerhaps I exaggerate slightly, but it is no overstatement to suggest that you will find yourself being guided by proverbial angels and demons perched on opposite shoulders, or a whispering voice inside your head. If you harness this sense, sharpening it as if it were another tool in your kit and letting it guide or at least advise your decision making, you will transcend the rocky realm of random trial and error when faced with problems, and tend toward the right answers instinctively.\n\nThis gland can also empower your ability to assess your own work, becoming a judge before whom all your work is cross-examined. 
A good craftsperson regularly takes a step back from their work, and questions every facet of their product for its precise alignment with their core values of quality and sincerity, and even the very necessity of each component.\n\nThe wrapping\n\nBy now, you may be thinking that I take this kind of thing far too seriously, but to terrify you further, I haven\u2019t even shared the half of it. Hopefully, though, this gives you an idea of the kind of levels of professionalism and dedication that it should take to get you on your way to becoming a craftsperson. It\u2019s a level of accomplishment and ability toward which we all should strive, both for our personal fulfilment and the betterment of the products we use daily. I look forward to seeing your finely crafted work throughout 2012.", "year": "2011", "author": "Ben Bodien", "author_slug": "benbodien", "published": "2011-12-24T00:00:00+00:00", "url": "https://24ways.org/2011/crafting-the-front-end/", "topic": "process"} {"rowid": 271, "title": "Creating Custom Font Stacks with Unicode-Range", "contents": "Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We\u2019ve all used it \u2014 either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts.\n\nIf you\u2019re like me, however, you\u2019ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec.\n\nOne such property \u2014 the unicode-range descriptor \u2014 sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways.\n\nUnicode-range\n\nThe unicode-range descriptor is designed to help when using fonts that don\u2019t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers. \n\n@font-face {\n font-family: BBCBengali;\n src: url(fonts/BBCBengali.ttf) format(\"opentype\");\n unicode-range: U+00-FF;\n}\n\nIn this example, the font is to be used for characters in the range of U+00 to U+FF which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to \u00ff at U+FF \u2013 the extent of the Basic Latin character range.\n\nBy adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts.\n\nWhen I say that it\u2019s possible to specify the range of characters the font covers, that\u2019s true, but what you\u2019re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together.\n\nThe best available ampersand\n\nA few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. 
Dan went on to outline how this can be achieved by wrapping our ampersands in a element with a class applied:\n\n&\n\nA CSS rule can then be written to select the and apply a different font:\n\nspan.amp {\n font-family: Baskerville, Palatino, \"Book Antiqua\", serif;\n}\n\nThat\u2019s a perfectly serviceable technique, but the drawbacks are clear \u2014 you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn\u2019t always possible when working with a CMS.\n\nPerhaps we could do this with unicode-range.\n\nA better best available ampersand\n\nThe Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so:\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n\nWhat we\u2019ve done here is specify a new family called Ampersand and created a font stack for it with the user\u2019s locally installed copies of Baskerville, Palatino or Book Antiqua. We\u2019ve then limited it to a single character range \u2014 the ampersand. Of course, those don\u2019t need to be local fonts \u2014 they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life.\n\nWe can then use that new family in a regular font stack.\n\nh1 {\n font-family: Ampersand, Arial, sans-serif;\n}\n\nWith this in place, any

      elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn\u2019t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial.\n\nYou didn\u2019t think it was that easy, did you?\n\nOh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all.\n\nIf you\u2019re familiar with how CSS works when it comes to unsupported properties, you\u2019ll know that if a browser encounters a property it doesn\u2019t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius \u2014 if the browser can\u2019t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect.\n\nLess perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters \u2014 the whole range. If you\u2019re using a fancy font for flamboyant ampersands, you probably don\u2019t want that applied to all your text if unicode-range isn\u2019t supported. That would be bad. Really bad.\n\nEnsuring good fallbacks\n\nAs ever, the trick is to make sure that there\u2019s a sensible fallback in place if a browser doesn\u2019t have support for whatever technology you\u2019re trying to use. This is where being a super nerd about understanding the spec you\u2019re working with really pays off.\n\nWe can make use of the rules of the CSS cascade to make sure that if unicode-range isn\u2019t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren\u2019t implemented.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n font-family: 'Ampersand';\n src: local('Arial');\n}\n\nIn theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don\u2019t have support, the second rule should just supersede the first, setting the font to Arial. \n\nUnfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That\u2019s both unexpected and unfortunate. Bad Firefox. On your rug.\n\nIf that doesn\u2019t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That\u2019s really what we\u2019re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we\u2019re never going to use in our page? 
Then it wouldn\u2019t affect the outcome for browsers that do support ranges.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n /* Ampersand fallback font */\n font-family: 'Ampersand';\n src: local('Arial');\n unicode-range: U+270C;\n}\n\nBy specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we\u2019re not using in any of our pages \u2014 U+270C.\n\nSo we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect!\n\nThat obscure character, my friends, is what Unicode defines as the VICTORY HAND.\n\n\u270c\n\nSo, how can we use this?\n\nAmpersands are a neat trick, and it works well in browsers that support ranges, but that\u2019s not really the point of all this. Styling ampersands is fun, but they\u2019re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting.\n\nHow do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end.\n\nOf course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you\u2019re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you\u2019ll have to use your own best judgement based on your needs and audience.\n\nOne thing to keep in mind is that if you\u2019re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn\u2019t be downloaded if none of the characters within the Unicode range are present in a given page.\n\nAs ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.", "year": "2011", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2011-12-01T00:00:00+00:00", "url": "https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/", "topic": "code"} {"rowid": 270, "title": "From Side Project to Not So Side Project", "contents": "In the last article I wrote for 24 ways, back in 2009, I enthused about the benefits of having a pet project, suggesting that we should all have at least one so that we could collaborate with our friends, escape our day jobs, fulfil our own needs, help others out, raise our profiles, make money, and \u2014 most importantly \u2014 have fun. I don\u2019t think I need to offer any further persuasions: it seems that designers and developers are launching their own pet projects left, right and centre. 
This makes me very happy.\n\nHowever, there still seems to be something of a disconnect between having a side project and turning it into something that is moderately successful; in particular, the challenge of making enough money to sustain the project and perhaps even elevating it from the sidelines so that it becomes something not so on the side at all.\n\nBefore we even begin this, let\u2019s spend a moment talking about money, also known as\u2026\n\nEvil, nasty, filthy money\n\nOver the last couple of years, I\u2019ve started referring to myself as an accidental businessman. I say accidental because my view of the typical businessman is someone who is driven by money, and I usually can\u2019t stand such people. Those who are motivated by profit, obsessed with growth, and take an active interest in the world\u2019s financial systems don\u2019t tend to be folks with whom I share a beer, unless it\u2019s to pour it over them. Especially if they\u2019re wearing pinstriped suits.\n\nThat said, we all want to make money, don\u2019t we? And most of us want to make a relatively decent amount, too. I don\u2019t think there\u2019s any harm in admitting that, is there? Hello, I\u2019m Elliot and I\u2019m a capitalist.\n\nThe key is making money from doing what we love. For most people I know in our community, we\u2019ve already achieved that \u2014 I\u2019m hard-pressed to think of anyone who isn\u2019t extremely passionate about working in our industry and I think it\u2019s one of the most positive, unifying benefits we enjoy as a group of like-minded people \u2014 but side projects usually arise from another kind of passion: a passion for something other than what we do as our day jobs. Perhaps it\u2019s because your clients are driving you mental and you need a break; perhaps it\u2019s because you want to create something that is truly your own; perhaps it\u2019s because you\u2019re sick of seeing your online work disappear so fast and you want to try your hand at print in order to make a more permanent mark.\n\nThe three factors I listed there led me to create 8 Faces, a printed magazine about typography that started as a side project and is now a very significant part of my yearly output and income.\n\nLike many things that prove fruitful, 8 Faces\u2019 success was something of an accident, too. For a start, the magazine was never meant to be profitable; its only purpose at all was to scratch my own itch. Then, after the first issue took off and I realized how much time I needed to spend in order to make the next one decent, it became clear that I would have to cover more than just the production costs: I\u2019d have to take time out from client work as well. Doing this meant I\u2019d have to earn some money. Probably not enough to equate to the exact amount of time lost when I could be doing client work (not that you could ever describe time as being lost when you work on something you love), but enough to survive; for me to feel that I was getting paid while doing all of the work that 8 Faces entailed. The answer was to raise money through partnerships with some cool companies who were happy to be associated with my little project.\n\nA sustainable business model\n\nBusiness model! I can\u2019t believe I just wrote those words! But a business model is really just a loose plan for how not to screw up. And all that stuff I wrote in the paragraph above about partnering with companies so I could get some money in while I put the magazine together? Well, that\u2019s my business model. 
\n\nIf you\u2019re making any product that has some sort of production cost, whether that\u2019s physical print run expenses or up-front dev work to get an app built, covering those costs before you even release your product means that you\u2019ll be in profit from the first copy you sell. This is no small point: production expenses are pretty much the only cost you\u2019ll ever need to recoup, so having them covered before you launch anything is pretty much the best possible position in which you could place yourself. Happy days, as Jamie Oliver would say.\n\nObtaining these initial funds through partnerships has another benefit. Sure, it\u2019s a form of advertising but, done right, your partners can potentially provide you with great content, too. In the case of 8 Faces, the ads look as nice as the rest of the magazine, and a couple of our partners also provide proper articles: genuinely meaningful, relevant, reader-pleasing articles at that. You\u2019d be amazed at how many companies are willing to become partners and, as the old adage goes, if you don\u2019t ask, you don\u2019t get.\n\nWith profit comes responsibility\n\nDon\u2019t forget about the responsibility you have to your audience if you engage in a relationship with a partner or any type of advertiser: although I may have freely admitted my capitalist leanings, I\u2019m still essentially a hairy hippy, and I feel that any partnership should be good for me as a publisher, good for the partner and \u2014 most importantly \u2014 good for the reader. Really, the key word here is relevance, and that\u2019s where 99.9% of advertising fails abysmally. \n\n(99.9% is not a scientific figure, but you know what I\u2019m on about.)\n\nThe main grey area when a side project becomes profitable is how you share that profit, partly because \u2014 in my opinion, at least \u2014 the transition from non-profitable side project to relatively successful source of income can be a little blurred. Asking for help for nothing when there\u2019s no money to be had is pretty normal, but sometimes it\u2019s easy to get used to that free help even once you start making money. I believe the best approach is to ask for help with the promise that it will always be rewarded as soon as there\u2019s money available. (Oh, god: this sounds like one of those nightmarish client proposals. It\u2019s not, honest.) If you\u2019re making something cool, people won\u2019t mind helping out while you find your feet.\n\nEvents often think that they\u2019re exempt from sharing profit. Perhaps that\u2019s because many event organizers think they\u2019re doing the speakers a favour rather than the other way around (that\u2019s a whole separate article), but it\u2019s shocking to see how many people seem to think they can profit from content-makers \u2014 speakers, for example \u2014 and yet not pay for that content. It was for this reason that Keir and I paid all of our speakers for our Insites: The Tour side project, which we ran back in July. We probably could\u2019ve got away without paying them, especially as the gig was so informal, but it was the right thing to do.\n\nIn conclusion: money as a by-product\n\nLet\u2019s conclude by returning to the slightly problematic nature of money, because it\u2019s the pivot on which your side project\u2019s success can swing, regardless of whether you measure success by monetary gain. I would argue that success has nothing to do with profit \u2014 it\u2019s about you being able to spend the time you want on the project. 
Unfortunately, that is almost always linked to money: money to pay yourself while you work on your dream idea; money to pay for more servers when your web app hits the big time; money to pay for efforts to get the word out there. The key, then, is to judge success on your own terms, and seek to generate as much money as you see fit, whether it\u2019s purely to cover your running costs, or enough to buy a small country. There\u2019s nothing wrong with profit, as long as you\u2019re ethical about it. (Pro tip: if you\u2019ve earned enough to buy a small country, you\u2019ve probably been unethical along the way.)\n\nThe point at which individuals and companies fail \u2014 in the moral sense, for sure, but often in the competitive sense, too \u2014 is when money is the primary motivation. It should never be the primary motivation. If you\u2019re not passionate enough about something to do it as an unprofitable side project, you shouldn\u2019t be doing it all. \n\nEarning money should be a by-product of doing what you love. And who doesn\u2019t want to spend their life doing what they love?", "year": "2011", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2011-12-22T00:00:00+00:00", "url": "https://24ways.org/2011/from-side-project-to-not-so-side-project/", "topic": "business"} {"rowid": 269, "title": "Adaptive Images for Responsive Designs\u2026 Again", "contents": "When I was asked to write an article for 24 ways I jumped at the chance, as I\u2019d been wanting to write about some fun hacks for responsive images and related parsing behaviours. My heart sank a little when Matt Wilcox beat me to the subject, but it floated back up when I realized I disagreed with his method and still had something to write about.\n\nSo, Matt Wilcox, if that is your real name (and I\u2019m pretty sure it is), I disagree. I see your dirty server-based hack and raise you an even dirtier client-side hack. Evil laugh, etc., etc.\n\nYou guys can stomach yet another article about responsive design, right? Right?\n\nHalf the room gets up to leave\n\nWhoa, whoa\u2026 OK, I\u2019ll cut to the chase\u2026\n\nTL;DR\n\nIn a previous episode, we were introduced to Debbie and her responsive cat poetry page. Well, now she\u2019s added some reviews of cat videos and some images of cats. Check out her new page and have a play around with the browser window. At smaller widths, the images change and the design responds. The benefits of this method are:\n\n\n\tit\u2019s entirely client-side\n\timages are still shown to users without JavaScript\n\tyour media queries stay in your CSS file\n\tno repetition of image URLs\n\tno extra downloads per image\n\tit\u2019s fast enough to work on resize\n\tit\u2019s pure filth\n\n\nWhat\u2019s wrong with the server-side solution?\n\nResponsive design is a client-side issue; involving the server creates a boatload of problems.\n\n\n\tIt sets a cookie at the top of the page which is read in subsequent requests. However, the cookie is not guaranteed to be set in time for requests on the same page, so the server may see an old value or no value at all.\n\tServing images via server scripts is much slower than plain old static hosting.\n\tThe URL can only cache with vary: cookie, so the cache breaks when the cookie changes, even if the change is unrelated. 
Also, far-future caching is out for devices that can change width.\n\tIt depends on detecting screen width, which is rather messy on mobile devices.\n\tResponding to things other than screen width (such as DPI) means packing more information into the cookie, and a more complicated script at the top of each page.\n\n\nSo, why isn\u2019t this straightforward on the client?\n\nClient-side solutions to the problem involve JavaScript testing user agent properties (such as screen width), looping through some images and setting their URLs accordingly. However, by the time JavaScript has sprung into action, the original image source has already started downloading. If you change the source of an image via JavaScript, you\u2019re setting off yet another request.\n\nImages are downloaded as soon as their DOM node is created. They don\u2019t need to be visible, they don\u2019t need to be in the document.\n\nnew Image().src = url\n\nThe above will start an HTTP request for url. This is a handy trick for quick requests and preloading, but also shows the browser\u2019s eagerness to download images.\n\nHere\u2019s an example of that in action. Check out the network tab in Web Inspector (other non-WebKit development aids are available) to see the image requests.\n\nBecause of this, some client-side solutions look like this:\n\n\n\nwhere t.gif is a 1\u00d71px tiny transparent GIF.\n\nThis results in no images if JavaScript isn\u2019t available. Dealing with the absence of JavaScript is still important, even on mobile. I was recently asked to make a website work on an old Blackberry 9000. I was able to get most of the way there by preventing that OS parsing any JavaScript, and that was only possible because the site didn\u2019t depend on it.\n\nWe need to delay loading images for JavaScript users, but ensure they load for users without JavaScript. How can we conditionally parse markup depending on JavaScript support?\n\nOh yeah!
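The noscript element. Anything inside a noscript is only parsed into the page when scripting is unavailable, and, crucially, any images inside it are only downloaded in that case too. That is exactly the kind of conditional parsing we need. As a rough sketch of the idea (the data attribute name and the image are illustrative assumptions, not a prescribed pattern):

<noscript data-src="photo.jpg">
	<img src="photo.jpg" alt="A lovely cat">
</noscript>

Browsers without JavaScript parse the noscript contents and request photo.jpg as normal. For everyone else, a script can read the attribute (or the noscript's text content), work out an appropriate size for the current environment, and create the image itself, so the full-size source is never requested unnecessarily.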

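For reference, here is a minimal sketch of the markup the autocomplete script expects. The searchbox and results IDs are the ones the JavaScript looks up; the input type and placeholder text are assumptions:

<input id="searchbox" type="search" placeholder="Search 24 ways...">
<div id="results"></div>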
      \nAnd now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:\n// Embed the SQL query in a multi-line backtick string:\nconst sql = `select\n snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n order by rank limit 10`;\n\n// Grab a reference to the \nconst searchbox = document.getElementById(\"searchbox\");\n\n// Used to avoid race-conditions:\nlet requestInFlight = null;\n\nsearchbox.onkeyup = debounce(() => {\n const q = searchbox.value;\n // Construct the API URL, using encodeURIComponent() for the parameters\n const url = (\n \"https://search-24ways.herokuapp.com/24ways-866073b.json?sql=\" +\n encodeURIComponent(sql) +\n `&search=${encodeURIComponent(q)}&_shape=array`\n );\n // Unique object used just for race-condition comparison\n let currentRequest = {};\n requestInFlight = currentRequest;\n fetch(url).then(r => r.json()).then(d => {\n if (requestInFlight !== currentRequest) {\n // Avoid race conditions where a slow request returns\n // after a faster one.\n return;\n }\n let results = d.map(r => `\n
      ${htmlEscape(r.title)}
      ${htmlEscape(r.author)} - ${r.year}
      ${highlight(r.snippet)}
      \n `).join(\"\");\n document.getElementById(\"results\").innerHTML = results;\n });\n}, 100); // debounce every 100ms\nThere\u2019s just one more utility function, used to help construct the HTML results:\nconst highlight = (s) => htmlEscape(s).replace(\n /b4de2a49c8/g, ''\n).replace(\n /8c94a2ed4b/g, ''\n);\nThis is what those unique strings passed to the snippet() function were for.\nAvoiding race conditions in autocomplete\nOne trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:\n\nUser types acces\nBrowser sends request A - querying documents matching acces*\nUser continues to type accessibility\nBrowser sends request B - querying documents matching accessibility*\nRequest B returns. It was fast, because there are fewer documents matching the full term\nThe results interface updates with the documents from request B, matching accessibility*\nRequest A returns results (this was the slower of the two requests)\nThe results interface updates with the documents from request A - results matching access*\n\nThis is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on.\nThankfully there\u2019s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.\nAny time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.\nWhen the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results.\nIt\u2019s not a lot of code, really\nAnd that\u2019s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don\u2019t think it matters. You\u2019re welcome to refactor it as much you like.\nHow good is this search implementation? I\u2019ve been building search engines for a long time using a wide variety of technologies and I\u2019m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it\u2019s based on SQL makes it easy and flexible to work with.\nA surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.\nMore importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you\u2019re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.\nFor more of my writing on Datasette, check out the datasette tag on my blog. 
And if you do build something fun with it, please let me know on Twitter.", "year": "2018", "author": "Simon Willison", "author_slug": "simonwillison", "published": "2018-12-19T00:00:00+00:00", "url": "https://24ways.org/2018/fast-autocomplete-search-for-your-website/", "topic": "code"} {"rowid": 248, "title": "How to Use Audio on the Web", "contents": "I know what you\u2019re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! \ud83d\ude49\nYou\u2019re having flashbacks, flashbacks to the days of yore, when we had a element and yup did everyone think that was the most rad thing since . I mean put those two together with a , only use CSS colour names, make sure your borders were all set to ridge and you\u2019ve got yourself the neatest website since 1998.\nThe sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then.\nYes it is 2018, the end of in fact, soon to be 2019. We are certainly living in the future. Hoverboards self driving cars, holodecks VR headsets, rocket boots drone racing, sound on websites get real, Ruth.\nWe can\u2019t help but be jaded, even though the element is depreciated, and the autoplay policy appeared this year. Although still in it\u2019s infancy, the policy \u201ccontrols when video and audio is allowed to autoplay\u201d, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future.\nBut then of course comes the question, having lived in a muted present for so long, where and why would you use audio?\n\u2728 Showcase Time \u2728\nThere are some incredible uses of audio on websites today. This is my personal favourite futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience.\nfuturelibrary.no\nAnother site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good.\npottermore.com\nEighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive, it adds to the experience.\nEighty-six and a half years\nSound can be powerful and in some cases useful. Last year I wrote about using them to help validate forms. Audiochart is a library which \u201callows the user to explore charts on web pages using sound and the keyboard\u201d. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here.\nThen there\u2019s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the powers with have with the Web Audio API today.\nlightnote.co\nlearningmusic.ableton.com\nConsiderations\nYes, tis the season, let\u2019s be more thoughtful about our audios. 
There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to \u2018enter\u2019 the site and headphones are recommended. This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) by stating headphones are recommended you are setting the users expectations, they will expect sound, and if in a public setting can enlist the use of a common electronic device to cause less embarrassment.\nEighty-six and a half years\nAllowing mute and/or volume control clearly within the user interface is a good idea. It won\u2019t draw the user out of the experience, it\u2019ll give more control to the user about what audio they want to hear (they may not want to turn down the volume of their entire device), and it\u2019s less thought to reach for a very visible volume than to fumble with device settings.\nIndicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn\u2019t always the first place to look for everyone.\nTo The Future\nSo let\u2019s go!\nWe see amazing demos built with Web Audio, and I\u2019m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that.\nBut audio doesn\u2019t actually need to be all bells and whistles (hey, it\u2019s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started to introduce some good sound design in your web design.\nIsn\u2019t it great then that there\u2019s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers).\nThis year I believe we have all experienced the web as a shopping mall more than ever. It\u2019s shining store fronts, flashing adverts, fast food, loud noises.\nLet\u2019s use 2019 to create more forests to explore, oceans to dive and mountains to climb.", "year": "2018", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2018-12-22T00:00:00+00:00", "url": "https://24ways.org/2018/how-to-use-audio-on-the-web/", "topic": "design"} {"rowid": 247, "title": "Managing Flow and Rhythm with CSS Custom Properties", "contents": "An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn\u2019t just make your web pages and views look better, but it can also improve the scan-ability. \nBrowsers ship with default CSS and these styles often create consistent rhythm for flow elements out of the box. The problem is though that we often reset these styles with a reset. Elements such as
and
      also have no default margin or padding associated with them. \nI\u2019ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS while also levelling the playing field for component driven front-ends with very little success. This experimentation is how I landed on the flow utility, though and I\u2019m going to show you how it works. Let\u2019s dive in!\nThe Flow utility\nWith the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it\u2019s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. \nThat\u2019s fine for 90% of contexts, but sometimes, it\u2019s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy.\nThe code\n.flow {\n --flow-space: 1em;\n} \n\n.flow > * + * { \n margin-top: var(--flow-space);\n}\nWhat this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don\u2019t have any idea what surrounds them. \nThe other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty about setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need it. Pretty cool, right? Let\u2019s look at some examples.\nA basic example\nSee the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/LXqerj\nWhat we\u2019ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there\u2019s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. \nBecause our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a
h2
      in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we\u2019d affect everything because margin values would be calculated as 1.1X the font size. \nThis example looks great because using font size as the basis of rhythm works really well. What if we wanted to to tweak certain elements within this article, though? \nSee the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/qQgxaY\nI like lots of whitespace with my article layouts, so the 1em space isn\u2019t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances:\nh2 {\n --flow-space: 3rem;\n}\nNotice also how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it\u2019s better for accessibility to use flexible units like em, rem and %, so that a user\u2019s font size preferences are honoured. \nA more advanced example\nAlthough the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space.\nSee the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/ZmPGyL\nIn this example, we\u2019ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article\u2019s content would space itself:\n
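As a rough sketch of how that might look in the markup (only the media__content and flow class names come from the example itself):
<div class="media__content flow">
  ...
</div>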
      \nSomething to remember though: the custom properties cascade in the same way that other CSS values do, so you\u2019ve got to keep that in mind. We\u2019ve got a great example of that in this example where because we\u2019ve got the flow utility on our .features component, which has a --flow-space override: the child elements of .features will inherit that value, so we\u2019ve had to set another value on the .features__list element.\n\u201cBut what about old browsers?\u201d, I hear you cry\nWe\u2019re using CSS Custom Properties that at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element\u2019s font-size:\n.flow {\n --flow-space: 1em;\n}\n\n.flow > * + * { \n margin-top: 1em;\n margin-top: var(--flow-space);\n}\nThanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them\u2014those that don\u2019t will ignore them. Yay for the cascade and progressive enhancement! \nWrapping up\nThis tiny little utility can bring great power for when you want to consistently space elements, vertically. It also\u2014thanks to the power of the modern web\u2014allows us to create contextual overrides without creating modifier classes or shame CSS. \nIf you\u2019ve got other methods of doing this sort of work, please let me know on Twitter. I\u2019d love to see what you\u2019re working on!", "year": "2018", "author": "Andy Bell", "author_slug": "andybell", "published": "2018-12-07T00:00:00+00:00", "url": "https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/", "topic": "code"} {"rowid": 246, "title": "Designing Your Site Like It\u2019s 1998", "contents": "It\u2019s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary\u2014and my fourteenth contribution to 24 ways\u2014 I\u2019d like to explain how I would\u2019ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.\nMy design for Planes, Trains and Automobiles is fixed at 800px wide.\nDeveloping a framework\nI\u2019ll start by using frames to set up the framework for this new website. Frames are individual pages\u2014one for navigation, the other for my content\u2014pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a element.\nI add two rows to my ; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don\u2019t want frame borders or any space between my frames, I set frameborder and framespacing attributes to 0:\n\n[\u2026]\n\nNext I add the source of my two frame documents. I don\u2019t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame:\n\n\n\n\nI do want links from my navigation to open in the content frame, so I give each a name so I can specify where I want links to open:\n\n\n\n\nThe framework for this website is simple as it contains only two horizontal rows. 
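A sketch of the frameset just described might look like this (the file names and frame names are guesses; the attributes come from the description above):
<frameset rows="50,*" frameborder="0" framespacing="0">
  <!-- fixed 50 pixel navigation row, flexible content row -->
  <frame src="nav.html" name="nav" noresize>
  <frame src="content.html" name="content">
</frameset>
Links in the navigation document would then use target="content" so that pages open in the content frame.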
Should I need a more complex layout, I can nest as many framesets\u2014and as many individual documents\u2014as I need:\n\n \n \n \n \n \n\nLetterbox framesets were common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space.\nHandling older browsers\nSadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content:\n\n<body>\n<p>This page uses frames, but your browser doesn\u2019t support them. \n Please upgrade your browser.</p>\n</body>\n\nForcing someone back into a frame\nSometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a , people could easily miss out on other parts of a design. This short script will prevent this happening and because it\u2019s vanilla Javascript, it doesn\u2019t require a library such as jQuery:\n\n\nLaying out my page\nBefore starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website:\n\nI want absolute control over how people experience my design and don\u2019t want to allow it to stretch, so I first need a
table which limits the width of my layout to 800px. The align attribute will keep this table in the centre of someone\u2019s screen:
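Reconstructing from that description, the wrapper might look something like this (the cellpadding and cellspacing values are assumptions):
<table width="800" align="center" cellpadding="0" cellspacing="0">
  <tr>
    <td>
      <!-- the rest of the layout sits in here -->
    </td>
  </tr>
</table>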
    \nAlthough they were developed for displaying tabular information, the cells and rows which make up the element make it ideal for the precise implementation of a design. I need several tables\u2014often nested inside each other\u2014to implement my design. These include tables for a banner and three rows of content:\n
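A sketch of that overall structure, with the detail left out:
<!-- banner -->
<table>
  <!-- logo -->
</table>

<!-- three rows of content -->
<table>
  <!-- row one -->
</table>
<table>
  <!-- row two -->
</table>
<table>
  <!-- row three -->
</table>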
    \n\nThe width of the first table\u2014used for my banner\u2014is fixed to match the logo it contains. As I don\u2019t need borders, padding, or spacing between these cells, I use attributes to remove them:\n\n \n \n \n
    \"Logo\"
    \nThe next table\u2014which contains the largest image, introduction, and a call-to-action\u2014is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images:\n\n \n \n\nThe height and width of these \u201cshims\u201d or \u201cspacers\u201d is only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.\nFor the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:\n\n \u00a0\n [\u2026]\n\n\n\n \u00a0\n\nI use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table\u2014this time with a white background\u2014inside it:\n\n \n \n \n
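Reconstructed from that description, the nested pair might look something like this (the colours and padding are guesses; the single pixel of cellspacing is what draws the border):
<table bgcolor="#003366" cellspacing="1" cellpadding="0" border="0">
  <tr>
    <td>
      <table bgcolor="#ffffff" cellspacing="0" cellpadding="10" border="0" width="100%">
        <tr>
          <td>
            <!-- the two columns of content go in here -->
          </td>
        </tr>
      </table>
    </td>
  </tr>
</table>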
    \nAdding details\nTables are fabulous tools for laying out a page, but they\u2019re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my \u201cBuy the DVD\u201d call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button:\n\n \n \n\n \n\n \n \n
Buy the DVD
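Only the link text survives above, but reconstructed from the description the three-slice button might be built something like this (file names, sizes, and the link target are guesses):
<table cellpadding="0" cellspacing="0" border="0">
  <tr>
    <!-- fixed-width rounded ends, stretchy gradient middle, sized with spacers -->
    <td background="button-left.gif"><img src="shim.gif" width="10" height="30" alt=""></td>
    <td background="button-middle.gif"><a href="buy.html">Buy the DVD</a></td>
    <td background="button-right.gif"><img src="shim.gif" width="10" height="30" alt=""></td>
  </tr>
</table>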
    \nI use those same elements to add details to headlines and lists too. Adding a \u201cbullet\u201d to each item in a list needs only two additional table cells, a circular graphic, and a spacer:\n\n \n \n \n \n \n
Directed by John Hughes
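The surviving text above is the list item itself; the row around it, reconstructed from the description, might be (file names and sizes are guesses):
<tr>
  <!-- circular bullet graphic, then a spacer, then the item text -->
  <td><img src="bullet.gif" width="8" height="8" alt=""></td>
  <td><img src="shim.gif" width="5" height="1" alt=""></td>
  <td>Directed by John Hughes</td>
</tr>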
    \nImplementing a typographic hierarchy\nSo far I\u2019ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use elements to change the typeface from the browser\u2019s default to any font installed on someone\u2019s device:\nPlanes, Trains and Automobiles is a comedy film [\u2026]\nTo adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser\u2019s default. I use a size of 4 for this headline and 2 for the text which follows:\nSteve Martin\n\nAn American actor, comedian, writer, producer, and musician.\nWhen I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every element on all pages on my website.\nNB: I use as many
    elements as needed to create space between headlines and paragraphs.\nView the final result (and especially the source.)\nMy modern day design for Planes, Trains and Automobiles.\nI can imagine many people reading this and thinking \u201cThis is terrible advice because we don\u2019t develop websites like this in 2018.\u201d That\u2019s true.\nWe have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:\nfont-variant-caps: titling-caps;\nfont-variant-ligatures: common-ligatures;\nfont-variant-numeric: oldstyle-nums;\nGrid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:\nbody {\n display: grid;\n grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;\n grid-template-rows: auto;\n grid-column-gap: 2vw;\n grid-row-gap: 1vh;\n}\nFlexbox has made it easy to develop flexible components such as navigation links:\nnav ul { display: flex; }\nnav li { flex: 1; }\nJust one line of CSS can create multiple columns of fluid type:\nmain { column-width: 12em; }\nCSS Shapes enable text to flow around irregular shapes including polygons:\n[src*=\"main-img\"] {\n float: left;\n shape-outside: polygon(\u2026);\n}\nToday, we wouldn\u2019t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:\n.btn {\n background: linear-gradient(#8B1212, #DD3A3C);\n border-radius: 1em;\n box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);\n}\nCSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we\u2019re certain we know how to design and develop products and websites better than we did at the end of 1998.\nStrange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That\u2019s why it\u2019s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today\u2014tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass\u2014will be any more relevant in the future than elements, frames, layout tables, and spacer images are today.\nI have no prediction for what the web will be like twenty years from now. However, I want to believe we\u2019ll build on what we\u2019ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won\u2019t be repeated.\n\nHead over to my website if you\u2019d like to read about how I\u2019d implement my design for \u2018Planes, Trains and Automobiles\u2019 today.", "year": "2018", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2018-12-23T00:00:00+00:00", "url": "https://24ways.org/2018/designing-your-site-like-its-1998/", "topic": "code"} {"rowid": 245, "title": "Web Content Accessibility Guidelines 2.1\u2014for People Who Haven\u2019t Read the Update", "contents": "Happy United Nations International Day of Persons with Disabilities 2018! The United Nations chose \u201cEmpowering persons with disabilities and ensuring inclusiveness and equality\u201d as this year\u2019s theme. 
We\u2019ve seen great examples of that in 2018; for example, Paul Robert Lloyd has detailed how he improved the accessibility of this very website. \nOn social media, US Congressmember-Elect Alexandria Ocasio-Cortez started using the Clipomatic app to add live captions to her Instagram live stories, conforming to success criterion 1.2.4, \u201cCaptions (Live)\u201d of the Web Content Accessibility Guidelines (figure 1) \u2026and British Vogue Contributing Editor Sin\u00e9ad Burke has used the split-screen feature of Instagram live stories to invite an interpreter to provide live Sign Language interpretation, going above and beyond success criterion 1.2.6, \u201cSign Language (Prerecorded)\u201d of the Web Content Accessibility Guidelines (figure 2).\n\nFigure 1: Screenshot of Alexandria Ocasio-Cortez\u2019s Instagram story with live captionsFigure 2: Screenshot of Sin\u00e9ad Burke\u2019s Instagram story with Sign Language Interpretation\nThat theme chimes with this year\u2019s publication of the World Wide Web Consortium (W3C)\u2019s Web Content Accessibility Guidelines (WCAG) 2.1. In last year\u2019s \u201cWeb Content Accessibility Guidelines\u2014for People Who Haven\u2019t Read Them\u201d, I mentioned the scale of the project to produce this update during 2018: \u201cthe editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible\u201d. \nThe WCAG working group have added 17 success criteria to the 61 that they released way back in 2008\u2014for context, that was 1\u00bd years before Apple released their first iPad! These new criteria make it easier than ever for us web geeks to produce work that is more accessible to people using mobile devices and touchscreens, people with low vision, and people with cognitive and learning disabilities. \nOnce again, let\u2019s rip off all the legalese and ambiguous terminology like wrapping paper, and get up to date.\nCan your users perceive the information on your website?\nThe first guideline has criteria that help you prevent your users from asking, \u201cWhat the **** is this thing here supposed to be?\u201d We\u2019ve seven new criteria for this guideline.\n1.3.4 Some people can\u2019t easily change the orientation of the device that they use to browse the web, and so you should make sure that your users can use your website in portrait orientation and in landscape orientation. Consider how people slowly twirl presents that they have plucked from under the Christmas tree, to find the appropriate orientation\u2014and expect your users to do likewise with your websites and apps. We\u2019ve had 18\u00bd years since John Allsopp\u2019s revelatory Dao of Web Design enlightened us to \u201cembrace the fact that the web doesn\u2019t have the same constraints\u201d as printed pages, and to \u201cdesign for this flexibility\u201d. So, even though this guideline doesn\u2019t apply to websites where \u201ca specific display orientation is essential,\u201d such as a piano tutorial, always ask yourself, \u201cWhat would John Allsopp do?\u201d\n1.3.5 You should help the user\u2019s browser to automatically complete\u2013or not complete\u2013form fields, to save the user some time and effort. The surprisingly powerful and flexible autocomplete attribute for input elements should prove most useful here. If you\u2019ve used microformats or microdata to mark up information about a person, the autocomplete attribute\u2019s range of values should seem familiar. 
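For example, a simple contact form might hint at the expected values like this (the field names are only illustrative):
<label for="name">Name</label>
<input id="name" name="name" autocomplete="name">

<label for="email">Email</label>
<input id="email" name="email" type="email" autocomplete="email">

<label for="postcode">Postcode</label>
<input id="postcode" name="postcode" autocomplete="postal-code">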
I like how the W3\u2019s \u201cUsing HTML 5.2 autocomplete attributes\u201d says that autocompleted values in forms help \u201cthose with dexterity disabilities who have trouble typing, those who may need more time, and anyone who wishes to reduce effort to fill out a form\u201d (emphasis mine). Um\u2026\ud83d\ude4b\u200d\u2642\ufe0f\n1.3.6 I like this one a lot, because it can help a huge audience to overcome difficulties that might prevent them from ever using the web. Some people have cognitive difficulties that affect their memory, focus, attention, language processing, and/or decision-making. Those users often rely on assistive technologies that present information through proprietary symbols, summaries of content, and keyboard shortcuts. You could use ARIA landmarks to identify the regions of each webpage. You could also keep an eye on the W3C\u2019s ongoing work on Personalisation Semantics.\n1.4.10 If you were to find a Nintendo Switch and \u201cSuper Mario Odyssey\u201d under your Christmas tree, you would have many hours of enjoyably scrolling horizontally and vertically to play the game. On the other hand, if you had to zoom a webpage to 400% so that you could read the content, you might have many hours of frustratedly scrolling horizontally and vertically to read the content. Learned reader, I assume you understand the purpose and the core techniques of Responsive Web Design. I also assume you\u2019re getting up to speed with the new Grid, Flexbox, and Box Alignment techniques for layout, and overflow-wrap. Using those skills, you should make sure that all content and functionality remain available when the browser is 320px wide, without your user needing to scroll horizontally. (For vertical text, you should make sure that all content and functionality remain available when the browser is 256px high, without your user needing to scroll vertically.) You don\u2019t have to do this for anything that would lose meaning if you restructured it into one narrow column. That includes some images, maps, diagrams, video, games, presentations, and data tables. Remember to check how your media queries affect font size: your user might find that text becomes smaller as they zoom into the webpage. So, test this one on real devices, or\u2014better yet\u2014test it with real users.\n1.4.11 In \u201cWeb Content Accessibility Guidelines\u2014for People Who Haven\u2019t Read Them\u201d, I recommended bookmarking Lea Verou\u2019s Contrast Ratio calculator for checking that text contrasts enough with its background (for success criteria 1.4.3 and 1.4.6), so that more people can read it more easily. For this update, you should make sure that form elements and their focus states have a 3:1 contrast ratio with the colour around them. This doesn\u2019t apply to controls that use the browser\u2019s default styling. Also, you should make sure that graphics that convey information have a 3:1 contrast ratio with the colour around them.\n1.4.12 Some people, due to low vision or dyslexia, might need to modify the typography that you agonised over. 
Research indicates that you should make sure that all content and functionality would remain available if a user were to set:\n\nline height to at least 1\u00bd \u00d7 the font size;\nspace below paragraphs to at least 2 \u00d7 the font size;\nletter spacing to at least 0.12 \u00d7 the font size;\nword spacing to at least 0.16 \u00d7 the font size.\n\nTo test this, check for text overlapping, text hiding behind other elements, or text disappearing.\n1.4.13 Sometimes when visiting a website, you hover over\u2014or tab on to\u2014something that unleashes a newsletter subscription pop-up, some suggested \u201crelated content\u201d, and/or a GDPR-related pop-up. On a well-designed website, you can press the Esc key on your keyboard or click a prominent \u201cClose\u201d button or \u201cX\u201d button to vanquish such intrusions. If the Esc key fails you, or if you either can\u2019t see or can\u2019t click the \u201cClose\u201d button\u2026well, you\u2019ll probably just close that browser tab. This situation can prove even more infuriating for users with low vision or cognitive disabilities. So, if new content appears when your user hovers over or tabs on to some element, you should make sure that:\n\nyour user can dismiss that content without needing to move their pointer or tab on to some other element (this doesn\u2019t apply to error warnings, or well-behaved content that doesn\u2019t obscure or replace other content);\nthe new content remains visible while your user moves their cursor over it;\nthe new content remains visible as long as the user hovers over that element or dismisses that content\u2014or until the new content is no longer valid.\n\nThis doesn\u2019t apply to situations such as hovering over an element\u2019s title attribute, where the user\u2019s browser controls the display of the content that appears.\nCan users operate the controls and links on your website?\nThe second guideline has criteria that help you prevent your users from asking, \u201cHow the **** does this thing work?\u201d We\u2019ve nine new criteria for this guideline.\n2.1.4 Some websites offer keyboard shortcuts for users. For example, the keyboard shortcuts for Gmail allow the user to press the \u21e7 key and u to mark a message as unread. Usually, shortcuts on websites include modifier keys, such as Ctrl, along with a letter, number, or punctuation symbol. Unfortunately, users who have dexterity challenges sometimes trigger those shortcuts by accident, and that can make a website impossible to use. Also, speech input technology can sometimes trigger those shortcuts. If your website offers single-character keyboard shortcuts, you must allow your user to turn off or remap those shortcuts. This doesn\u2019t apply to single-character keyboard shortcuts that only work when a control, such as drop-down list, has focus.\n2.2.6 If your website uses a timeout for some process, you could store the user\u2019s data for at least 20 hours, so that users with cognitive disabilities can take a break or take longer than usual to complete the process without losing their place or losing their data. Alternatively, you could warn the user, at the start of the process, about that the website will timeout after whatever amount of time you have chosen. \n2.3.3 If your website has some non-essential animation (such as parallax scrolling) that starts when the user does some particular action, you could allow the user to turn off that animation so that you avoid harming users with vestibular disorders. 
The prefers-reduced-motion media query currently has limited browser support, but you can start using it now to avoid showing animations to users who select the \u201cReduce Motion\u201d setting (or equivalent) in their device\u2019s operating system:\n@media (prefers-reduced-motion: reduce) {\n .MrFancyPants {\n animation: none;\n }\n}\n2.5.1 Some websites let users use multi-touch gestures on touchscreen devices. For example, Google Maps allows users to pinch with two fingers to zoom out and \u201cunpinch\u201d with two fingers to zoom in. Also, some websites allow users to drag a finger to do some action, such as changing the value on an input element with type=\"range\", or swiping sideways to the next photograph in a gallery. Some users with dexterity challenges, and some users who use a head pointer, an eye-gaze system, or speech-controlled mouse emulation, might find multi-touch gestures or dragging impossible. You must make sure that your website supports single-tap alternatives to any multi-touch gestures or dragging actions that it provides. For example, if your website lets someone pinch and unpinch a map to zoom in and out, you must also provide buttons that a user can tap to zoom in and out.\n2.5.2 This might be my favourite accessibility criterion ever! Did you ever touch or press a \u201cSend\u201d button but then immediately realise that you really didn\u2019t want to send the message, and so move your finger or cursor away from the \u201cSend\u201d button before lifting your finger?! Imagine how many arguments that functionality has prevented. \ud83d\ude0c You must make sure that touching or pressing does not cause anything to happen before the user raises their finger or cursor, or make sure that the user can move their finger or cursor away to prevent the action. In JavaScript, prefer onclick to onmousedown, unless your website has actions that need onmousedown. Also, this doesn\u2019t apply to actions that need to happen as soon as the user clicks or touches. For example, a user playing a \u201cWhac-A-Mole\u201d game or a piano emulator needs the action to happen as soon as they click or touch the screen.\n2.5.3 Recently, entrepreneur and social media guru Gary Vaynerchuk has emphasised the rise of audio and voice as output and input. He quotes a Google statistic that says one in five search queries use voice input. Once again, users with disabilities have been ahead of the curve here, having used screen readers and/or dictation software for many years. You must make sure that the text that appears on a form control or image matches how your HTML identifies that form control or image. Use proper semantic HTML to achieve this:\n\nuse the label element to pair text with the corresponding input element;\nuse an alt attribute value that exactly matches any text that appears in an image;\nuse an aria-labelledby attribute value that exactly matches the text that appears in any complex component.\n\n2.5.4 Modern Web APIs allow web developers to specify how their website will react to the user shaking, tilting, or gesturing towards their device. Some users might find those actions difficult, impossible, or embarrassing to perform. If you make any functionality available when the user shakes, tilts, or gestures towards their device, you must provide form controls that make that same functionality available. As usual, this doesn\u2019t apply to websites that require shaking, tilting, or gesturing; this includes some games and music programmes. 
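As a rough sketch (the element id, threshold, and preference flag are all made up), the same action stays available through an ordinary button and shake detection is both optional and switchable:
let userPrefersShake = false; // wired to a visible setting in the interface

const undoButton = document.querySelector('#undo');
undoButton.addEventListener('click', undoLastAction);

window.addEventListener('devicemotion', (event) => {
  // Respect the setting that switches shake detection off
  if (!userPrefersShake) { return; }
  const accel = event.accelerationIncludingGravity;
  if (accel && Math.abs(accel.x) > 25) {
    undoLastAction(); // the same function the button calls
  }
});

function undoLastAction() {
  // revert the last change here
}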
John Gruber describes the iPhone\u2019s \u201cShake to Undo\u201d gesture as \u201cdreadful \u2014 impossible to discover through exploration of the on-screen [user interface], bad for accessibility, and risks your phone flying out of your hand\u201d. This accessibility criterion seems to empathise with John: you must make sure that your user can prevent your website from responding to shaking, tilting and/or gesturing towards their device.\n2.5.5 Homer Simpson\u2019s telephone famously complained, \u201cThe fingers you have used to dial are too fat.\u201d I think we\u2019ve all felt like that when using phones and tablets, particularly when trying to dismiss pop-ups and ads. You could make interactive elements at least 44px wide \u00d7 44px high. Apple\u2019s \u201cHuman Interface Guidelines\u201d agree: \u201cProvide ample touch targets for interactive elements. Try to maintain a minimum tappable area of 44pt x 44pt for all controls.\u201d This doesn\u2019t apply to links within inline text, or to unsoiled elements.\n2.5.6 Expect your users to use a variety of input devices they want, and to change from one to another whenever they please. For example, a user with a tablet and keyboard might jab icons on the screen while typing on the keyboard, or a user might dictate text while alone and then type on a keyboard when a colleague arrives. You could make sure that your website allows your users to use whichever available input modality they choose. Once again, this doesn\u2019t apply to websites that require a specific modality; this includes typing tutors and music programmes.\nCan users understand your content?\nThe third guideline has criteria that help you prevent your users from asking, \u201cWhat the **** does this mean?\u201d We\u2019ve no new criteria for this guideline. \nHave you made your website robust enough to work on your users\u2019 browsers and assistive technologies?\nThe fourth and final guideline has criteria that help you prevent your users from asking, \u201cWhy the **** doesn\u2019t this work on my device?\u201d We\u2019ve one new criterion for this guideline.\n4.1.3 Sometimes you need to let your user know the status of something: \u201cDid it work OK? What was the error? How far through it are we?\u201d However, you should avoid making your user lose their place on the webpage, and so you should let them know the status without opening a new window, focusing on another element, or submitting a form. To do this properly for assistive technology users, choose the appropriate ARIA role for the new content; for example: \n\nif your user needs to know, \u201cDid it work OK?\u201d, add role=\"status\u201d;\nif your user needs to know, \u201cWhat was the error?\u201d, add role=\"alert\u201d;\nif you user needs to know, \u201cHow far through it are we?\u201d, add role=\"log\" (for a chat window) or role=\"progressbar\" (for, well, a progress bar).\n\nBetter design for humans\nMy favourite of Luke Wroblewski\u2019s collection of Design Quotes is, \u201cDesign is the art of gradually applying constraints until only one solution remains,\u201d from that most prolific author, \u201cUnknown\u201d. I\u2019ve always viewed the Web Content Accessibility Guidelines as people-based constraints, and liked how they help the design process. With these 17 new web content accessibility criteria, go forth and create solutions that more people than ever before can use.\nSpending those book vouchers you got for Christmas\nWhat next? 
If you\u2019re looking for something to do to keep you busy this Christmas, I thoroughly recommend these four books for increasing your accessibility expertise:\n\n\u201cPro HTML5 Accessibility\u201d by Joshue O Connor (Head of Accessibility (Interim) at the UK Government Digital Service, Director of InterAccess, and one of the editors of the Web Content Accessibility Guidelines 2.1): Although this book is six years old\u2014a long time in web design\u2014I find it an excellent go-to resource. It begins by explaining how people with disabilities use the web, and then expertly explains modern HTML in that context.\n\u201cA Web for Everyone\u2014Designing Accessible User Experiences\u201d by Sarah Horton (the Paciello Group\u2019s UX Strategy Lead) and Whitney Quesenbery (the Center for Civic Design\u2019s co-director): This book covers the Web Content Accessibility Guidelines 2.0, the principles of Universal Design, and design thinking. Its personas for Accessible UX and its profiles of well-known industry figures\u2014including some 24ways authors\u2014keep its content practical and relevant throughout.\n\u201cAccessibility For Everyone\u201d by Laura Kalbag (Ind.ie\u2019s co-founder and designer, and 24ways author): This book is just over a year old, and so serves as a great resource for up-to-date coverage of guidelines, laws, and accessibility features of operating systems\u2014as well as content, design, coding, and testing. The audiobook, which Laura narrates, can help you and your colleagues go from having little or no understanding of web accessibility, to becoming familiar with all aspects of web accessibility\u2014in less than four hours.\n\u201cJust Ask: Integrating Accessibility Throughout Design\u201d by Shawn Lawton Henry (the World Wide Web Consortium (W3C)\u2019s Web Accessibility Initiative (WAI)\u2019s Outreach Coordinator): Although this book is 11\u00bd years old, the way it presents accessibility as part of the User-Centered Design process is timeless. I found its section on Usability Testing with people with disabilities particularly useful.", "year": "2018", "author": "Alan Dalton", "author_slug": "alandalton", "published": "2018-12-03T00:00:00+00:00", "url": "https://24ways.org/2018/wcag-for-people-who-havent-read-the-update/", "topic": "ux"} {"rowid": 244, "title": "It\u2019s Beginning to Look a Lot Like XSSmas", "contents": "I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional.\nIt\u2019s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it.\nWith platforms like GitHub and npm, giving the gift of code is so easy it\u2019s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it\u2019s also risky.\nVulnerabilities and Open Source\nOpen source code is not inherently more or less vulnerable than closed-source code. 
What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps.\nGraph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017\nIn the first 24 ways article this year, Katie referenced a few different types of vulnerability:\n\nCross-site Request Forgery (also known as CSRF)\nSQL Injection\nCross-site Scripting (also known as XSS)\n\nThere are many more types of vulnerability, and those that live under the same category share similarities. \nFor example, my favourite \u2013 is it weird to have a favourite vulnerability? \u2013 is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks \u2013 share it generously with your colleagues.\nMost vulnerabilities like this are not added intentionally \u2013 they\u2019re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library. \nOthers, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer\u2019s curiosity.\nAnother tactic to get developers to unwittingly install malicious packages into their codebase is \u201ctyposquatting\u201d \u2013 back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env). \nThis is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas.\nSanta\u2019s on his sleigh\nIf you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn\u2019t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too.\nLet\u2019s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that\u2019s just the tip of the iceberg \u2013 have a look at the full dependency tree which contains 109 dependencies \u2013 more dependencies than there are Christmas puns in this article. 
Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies.\nFixing vulnerabilities \u2013 the ultimate christmas gift\nIf you\u2019re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy.\nWhen you find out about a new vulnerability, you have some options:\nFix the vulnerability via an upgrade\nYou can often fix a vulnerability by upgrading the library to the latest version. Make sure you\u2019re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version.\nPatch the vulnerable code\nSometimes, a fix for a vulnerable library isn\u2019t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won\u2019t change anything else.\nSwitch to a different library\nIf the library you\u2019re using has no fix or patch, you may be better of switching it out for another one, particularly if it looks like it\u2019s being unmaintained.\nResponsibly disclosing vulnerabilities\nKnowing how to responsibly disclose vulnerabilities is something I\u2019m ashamed to admit that I didn\u2019t know about before I joined a security company. But it\u2019s so important! On discovering a new vulnerability, a developer has a few options: \n\nA malicious developer will exploit that vulnerability for their own gain. \nA reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited.\nAn ethical and aware developer will follow what\u2019s known as a \u201cresponsible disclosure process\u201d. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too.\n\nIt\u2019s important to understand this process if you\u2019re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code.\nSo what does responsible disclosure look like? I\u2019ll take Node.js\u2019s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There\u2019s a separate process for bug found in third-party npm packages). Once you\u2019ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they\u2019ll give you regular updates about the progress they\u2019re making towards fixing it. As part of this, they\u2019ll figure out which versions are affected, and prepare fixes for them. 
They\u2019ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org.\nTim Kadlec published an in-depth article about responsible disclosures if you\u2019re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed.\nEncourage responsible disclosure\nAdd a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability \u2013 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn\u2019t published one one.\nStats from the State of Open Source Security 2017\nBug bounties\nSome companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, it also reduces the enticement of exploiting a vulnerability over reporting it and getting a quick cash reward. Hackerone is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. Wordpress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program.\nIf you don\u2019t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets for a report about a critical vulnerability in exchange for money. \n\nOn one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question.\n\nA gift worth giving\nIt\u2019s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool \u2013 I\u2019m just a little biased towards Snyk, but as Katie mentioned, there\u2019s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities.\nStats from the State of Open Source Security 2017\nMost importantly, patch or upgrade away those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too.", "year": "2018", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2018-12-17T00:00:00+00:00", "url": "https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/", "topic": "code"} {"rowid": 243, "title": "Researching a Property in the CSS Specifications", "contents": "I frequently joke that I\u2019m \u201creading the specs so you don\u2019t have to\u201d, as I unpack some detail of a CSS spec in a post on my blog, some documentation for MDN, or an article on Smashing Magazine. However waiting for someone like me to write an article about something is a pretty slow way to get the information you need. 
Sometimes people like me get things wrong, or specifications change after we write a tutorial. \nWhat if you could just look it up yourself? That\u2019s what you get when you learn to read the CSS specifications, and in this article my aim is to give you the basic details you need to grab quick information about any CSS property detailed in the CSS specs.\nWhere are the CSS Specifications?\nThe easiest way to see all of the CSS specs is to take a look at the Current Work page in the CSS section of the W3C Website. Here you can see all of the specifications listed, the level they are at and their status. There is also a link to the specification from this page. I explained CSS Levels in my article Why there is no CSS 4.\nWho are the specifications for?\nCSS specifications are for everyone who uses CSS. You might be a browser engineer - referred to as an implementor - needing to know how to implement a feature, or a web developer - referred to as an author - wanting to know how to use the feature. The fact that both parties are looking at the same document hopefully means that what the browser displays is what the web developer expected.\nWhich version of a spec should I look at?\nThere are a couple of places you might want to look. Each published spec will have the latest published version, which will have TR in the URL and can be accessed without a date (which is always the newest version) or at a date, which will be the date of that publication. If I\u2019m referring to a particular Working Draft in an article I\u2019ll typically link to the dated version. That way if the information changes it is possible for someone to see where I got the information from at the time of writing.\nIf you want the very latest additions and changes to the spec, then the Editor\u2019s Draft is the place to look. This is the version of the spec that the editors are committing changes to. If I make a change to the Multicol spec and push it to GitHub, within a few minutes that will be live in the Editor\u2019s Draft. So it is possible there are errors, bits of text that we are still working out and so on. The Editor\u2019s Draft however is definitely the place to look if you are wanting to raise an issue on a spec, as it may be that the issue you are about to raise is already fixed.\nIf you are especially keen on seeing updates to specifications keep an eye on https://drafts.csswg.org/ as this is a list of drafts, along with the date they were last updated.\nHow to approach a spec\nThe first thing to understand is that most CSS Specifications start with the most straightforward information, and get progressively further into the weeds. For an author the initial examples and explanations are likely to be of interest, and then the property definitions and examples. Therefore, if you are looking at a vast spec, know that you probably won\u2019t need to read all the way to the bottom, or read every section in detail.\nThe second thing that is useful to know about modern CSS specifications is how modularized they are. It really never is a case of finding everything you need in a single document. If we tried to do that, there would be a lot of repetition and likely inconsistency between specs. 
There are some key specifications that many other specifications draw on, such as:\n\nValues and Units\nIntrinsic and Extrinsic Sizing\nBox Alignment\n\nWhen something is defined in another specification the spec you are reading will link to it, so it is worth opening that other spec in a new tab in order that you can refer back to it as you explore.\nResearching your property\nAs an example we will take a look at the property grid-auto-rows, this property defines row tracks in the implicit grid when using CSS Grid Layout. The first thing you will need to do is find out which specification defines this property.\nYou might already know which spec the property is part of, and therefore you could go directly to the spec and search using your browser or look in the navigation for the spec to find it. Alternatively, you could take a look at the CSS Property Index, which is an automatically generated list of CSS Properties.\nClicking on a property will take you to the TR version of the spec, the latest published draft, and the definition of that property in it. This definition begins with a panel detailing the syntax of this property. For grid-auto-rows, you can see that it is listed along with grid-auto-columns as these two properties are essentially identical. They take the same values and work in the same way, one for rows and the other for columns.\nValue\nFor value we can see that the property accepts a value . The next thing to do is to find out what that actually means, clicking will take you to where it is defined in the Grid spec.\nThe value is defined as accepting various values:\n\n\nminmax( , )\nfit-content( \n\nWe need to head down the rabbit hole to find out what each of these mean. From here we essentially go down line by line until we have unpacked the value of track-size.\n is defined just below as:\n\n\n\nmin-content\nmax-content\nauto\n\nSo these are all things that would be valid to use as a value for grid-auto-rows.\nThe first value of is something you will see in many specifications as a value. It means that you can use a length unit - for example px or em - or a percentage. Some properties only accept a in which case you know that you cannot use a percentage as the value. This means that you could have grid-auto-rows with any of the following values.\ngrid-auto-rows: 100px;\ngrid-auto-rows: 1em;\ngrid-auto-rows: 30%;\nWhen using percentages, it is important to know what it is a percentage of. As a percentage has to resolve from something. There is text in the spec which explains how column and row percentages work.\n\n\u201c values are relative to the inline size of the grid container in column grid tracks, and the block size of the grid container in row grid tracks.\u201d\n\nThis means that in a horizontal writing mode such as when using English, a percentage when used as a track-size in grid-auto-columns would be a percentage of the width of the grid, and a percentage in grid-auto-rows a percentage of the height of the grid.\nThe second value of is also defined here, as \u201cA non-negative dimension with the unit fr specifying the track\u2019s flex factor.\u201d This is the fr unit, and the spec links to a fuller definition of fr as this unit is only used in Grid Layout so it is therefore defined in the grid spec. We now know that a valid value would be:\ngrid-auto-rows: 1fr;\nThere is some useful information about the fr unit in this part of the spec. It is noted that the fr unit has an automatic minimum. This means that 1fr is really minmax(auto, 1fr). 
This is why having a number of tracks all at 1fr does not mean that all are equal sized, as a larger item in any of the tracks would have a large auto size and therefore would be larger after spare space had been distributed.\nWe then have min-content and max-content. These keywords can be used for track sizing and the specification defines what they mean in the context of sizing a track, representing the min and max-sizing contributions of the grid tracks. You will see that there are various terms linked in the definition, so if you do not know what these mean you can follow them to find out.\nFor example the spec links max-content contribution to the CSS Intrinsic and Extrinsic Sizing specification. This is one of those specs which is drawn on by many other specifications. If we follow that link we can read the definition there and follow further links to understand what each term means. The more that you read specifications the more these terms will become familiar to you. Just like learning a foreign language, at first you feel like you have to look up every little thing. After a while you remember the vocabulary.\nWe can now add min-content and max-content to our available values.\ngrid-auto-rows: min-content;\ngrid-auto-rows: max-content;\nThe final item in our list is auto. If you are familiar with using Grid Layout, then you are probably aware that an auto sized track for will grow to fit the content used. There is an interesting note here in the spec detailing that auto sized rows will stretch to fill the grid container if there is extra space and align-content or justify-content have a value of stretch. As stretch is the default value, that means these tracks stretch by default. Tracks using other types of length will not behave like this.\ngrid-auto-rows: auto;\nSo, this was the list for , the next possible value is minmax( , ). So this is telling us that we can use minmax() as a value, the final (max) value will be and we have already unpacked all of the allowable values there. The first value (min) is detailed as an . If we look at the values for this, we discover that they are the same as , minus the value:\n\n\nmin-content\nmax-content\nauto\n\nWe already know what all of these do, so we can add possible minmax() values to our list of values for .\ngrid-auto-rows: minmax(100px, 200px);\ngrid-auto-rows: minmax(20%, 1fr);\ngrid-auto-rows: minmax(1em, auto);\ngrid-auto-rows: minmax(min-content, max-content);\nFinally we can use fit-content( . We can see that fit-content takes a value of which we already know to be either a length unit, or a percentage. The spec details how fit-content is worked out, and it essentially allows a track which acts as if you had used the max-content keyword, however the track stops growing when it hits the length passed to it.\ngrid-auto-rows: fit-content(200px);\ngrid-auto-rows: fit-content(20%);\nThose are all of our possible values, and to round things off, check again at the initial value, you can see it has a little + sign next to it, click that and you will be taken to the CSS Values and Units module to find that, \u201cA plus (+) indicates that the preceding type, word, or group occurs one or more times.\u201d This means that we can pass a single track size to grid-auto-rows or multiple track sizes as a space separated list. Below the box is an explanation of what happens if you pass in more than one track size:\n\n\u201cIf multiple track sizes are given, the pattern is repeated as necessary to find the size of the implicit tracks. 
The first implicit grid track after the explicit grid receives the first specified size, and so on forwards; and the last implicit grid track before the explicit grid receives the last specified size, and so on backwards.\u201d\n\nTherefore with the following CSS, if five implicit rows were needed they would be as follows:\n\n100px\n1fr\nauto\n100px\n1fr\n\n.grid {\n display: grid;\n grid-auto-rows: 100px 1fr auto;\n}\nInitial\nWe can now move to the next line in the box, and you\u2019ll be glad to know that it isn\u2019t going to require as much unpacking! This simply defines the initial value for grid-auto-rows. If you do not specify anything, created rows will be auto sized. All CSS properties have an initial value that they will use if they are invoked as part of the usage of the specification they are in, but you do not set a value for them. In the case of grid-auto-rows it is used whenever rows are created in the implicit grid, so it needs to have a value to be used even if you do not set one.\nApplies to\nThis line tells us what this property is used for. Some properties are used in multiple places. For example if you look at the definition for justify-content in the Box Alignment specification you can see it is used in multicol containers, flex containers, and grid containers. In our case the property only applies for grid containers.\nInherited\nThis tells us if the property can be inherited from a parent element if it is not set. In the case of grid-auto-rows it is not inherited. A property such as color is inherited, so you do not need to set it on each element.\nPercentages\nAre percentages allowed for this property, and if so how are they calculated. In this case we are referred to the definition for grid-template-columns and grid-template-rows where we discover that the percentage is from the corresponding dimension of the content area.\nMedia\nThis defines the media group that the property belongs to. In this case visual.\nComputed Value\nThis details how the value is resolved. The grid-auto-rows property again refers to track sizing as defined for grid-template-columns and grid-template-rows, which tells us the computed value is as specified with lengths made absolute.\nCanonical Order\nIf you have a property\u2013generally a shorthand property\u2013which takes multiple values in a set order, then those values need to be serialized in the order detailed in the grammar for that property. In general you don\u2019t need to worry about this value in the table.\nAnimation Type\nThis details whether the property can be animated, and if so what type of animation. This is useful if you are trying to animate something and not getting the result that you expect. Note that just because something is listed in the spec as animatable does not mean that browsers will have implemented animation for that property yet!\nThat\u2019s (mostly) it!\nSometimes the property will have additional examples - there is one underneath the table for grid-auto-rows. These are worth looking at as they will highlight usage of the property that the spec editor has felt could use an example. There may also be some additional text explaining anythign specific to this property.\nIn selecting grid-auto-rows I chose a fairly complex property in terms of the work we needed to do to unpack the value. Many properties are far simpler than this. 
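To gather everything from this walkthrough into one place, here is a small sketch of my own (the .gallery class name and the specific values are illustrative choices, not taken from the spec) showing explicit and implicit row sizing working together:\n\n.gallery {\n display: grid;\n grid-template-columns: repeat(3, 1fr);\n grid-template-rows: 100px; /* the single explicit row */\n grid-auto-rows: minmax(min-content, auto); /* any implicit rows created for extra items */\n}\n\nThe first row is 100px tall; any further rows the browser has to create are sized by grid-auto-rows, using one of the values we unpacked above.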
However ultimately, even when you come across a complex value, it really is just a case of stepping through the definitions until you come to the bottom of the rabbit hole.\nBeing able to work out what is valid for each property is incredibly useful. It means you don\u2019t waste time trying to use a value that doesn\u2019t work for that property. You also may find that there are values you weren\u2019t aware of, that solve problems for you.\nFurther reading\nSpecifications are not designed to be user manuals, and while they often contain examples, these are pretty terse as they need to be clear to demonstrate their particular point. The manual for the Web Platform is MDN Web Docs. Pairing reading a specification with the examples and information on an MDN property page such as the one for grid-auto-rows is a really great way to ensure that you have all the information and practical usage examples you might need.\nYou may also find useful:\n\nValue Definition Syntax on MDN.\nThe MDN Glossary defines many common terms.\nUnderstanding the CSS Property Value Syntax goes into more detail in terms of reading the syntax.\nHow to read W3C Specs - from 2001 but still relevant.\n\nI hope this article has gone some way to demystify CSS specifications for you. Even if the specifications are not your preferred first stop to learn about new CSS, being able to go directly to the source and avoid having your understanding filtered by someone else, can be very useful indeed.", "year": "2018", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2018-12-14T00:00:00+00:00", "url": "https://24ways.org/2018/researching-a-property-in-the-css-specifications/", "topic": "code"} {"rowid": 242, "title": "Creating My First Chrome Extension", "contents": "Writing a Chrome Extension isn\u2019t as scary at it seems!\nNot too long ago, I used a Chrome extension called 20 Cubed. I\u2019m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.\nUnfortunately, the developer stopped updating the extension and removed it from Chrome\u2019s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?\nFortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the \u201cold-fashioned\u201d way. Sky\u2019s the limit!\nSetup\nBut first, some things you\u2019ll need to know about before getting started:\n\nCallbacks\nTimeouts\nChrome Dev Tools\n\nDeveloping with Chrome extension methods requires a lot of callbacks. If you\u2019ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I\u2019d highly recommend brushing up on that subject before getting started.\nHyperbole and a Half\nTimeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn\u2019t consider the fact that I\u2019d be juggling three timers. 
And I probably would\u2019ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.\nOn the note of organization, abstraction is important! You might have any combination of the following:\n\nThe Chrome extension options page\nThe popup from the Chrome Menu\nThe windows or tabs you create\nThe background scripts\n\nAnd that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you\u2019ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.\nAlright, now that you know all that up front, let\u2019s get going!\nDocumentation\nTL;DR READ THE DOCS.\nA few things to get started:\n\nRead Google\u2019s primer on browser extensions\nHave a look at their Getting started tutorial\nCheck out their overview on Chrome Extensions\n\nThis overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.\nThe manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here\u2019s what my manifest.json looked like, for example:\nhttps://github.com/jennz0r/eye-rest/blob/master/manifest.json\nBecause I\u2019m a visual learner, I found the images that describe the extension\u2019s architecture most helpful.\n\nTo clarify this diagram, the background.js file is the extension\u2019s event handler. It\u2019s constantly listening for browser events, which you\u2019ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.\nThe Popup is the little window that appears when you click on an extension\u2019s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { \"default_popup\": FILE_NAME_HERE }.\nThe Options page is exactly as it says. This displays customizable options only visible to the user when they either right-click on the Chrome menu and choose \u201cOptions\u201d under an extension. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.\nContent scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension.\nDebugging\nA quick note: don\u2019t forget the debugging tutorial!\nJust like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you\u2019re likely to have several inspector windows open \u2013 one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.\nFor example, I kept seeing the error \u201cThis request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.\u201d Well, it turns out there are limitations on how often you can sync stored information.\nAnother error I kept seeing was \u201cAlarm delay is less than minimum of 1 minutes. In released .crx, alarm \u201cALARM_NAME_HERE\u201d will fire in approximately 1 minutes\u201d. 
Well, it turns out there are minimum interval times for alarms.\nChrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help!\nMe adding a ton of `console.log`s while trying to debug my alarms.\nEye Rest Functionality\nOk, so what is the extension I created? Again, it\u2019s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:\n\nIf the extension is running AND\nIf the user has not clicked Pause in the Popup HTML AND\nIf the counter in the Popup HTML is down to 00:00 THEN\n\nOpen a new window with Timer HTML AND\nStart a 20 sec countdown in Timer HTML AND\nReset the Popup HTML counter to 20:00\n\nIf the Timer HTML is down to 0 sec THEN\n\nClose that window. Rinse. Repeat.\n\n\nSounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I decided to create, I decided to make one that\u2019s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn\u2019t realize until I started coding.\nFor visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it\u2019s a pun.)\nAPI\nNow let\u2019s discuss the APIs that I used to build this extension.\nAlarms\nWhat even are alarms? I didn\u2019t know either.\nAlarms are basically Chrome\u2019s setTimeout and setInterval. They exist because, as Google says\u2026\n\nDOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.\n\nFor more information, check out this background migration doc.\nOne interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn\u2019t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clearing any other alarms without that unique name.\nBackground Scripts\nFor Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file.\nI wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.\nI also wanted my Popup script to do the same things when the user has unpaused the extension\u2019s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.\nOther APIs\nWindows\nI use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.\nOne day, while coding late into the evening, I found it very confusing that the window.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window\u2019s HTML. Until then, it hadn\u2019t dawned on me that the url could be relative. Duh. 
I was tired!\nI pass the timer.html as the url option, as well as type, size, position, and other visual options.\nStorage\nMaybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome\u2019s storage is avoiding quotas and write operation maximums.\nI wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it\u2019s quite complicated and requires lots of writes. That\u2019s why I went with the user\u2019s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.\nDeclarative Content\nThe Declarative Content API allows you to show your extension\u2019s page action based on several type of matches, without needing to take a host permission or inject a content script. So you\u2019ll need this to get your extension to work in the browser!\nYou can see me set this in my Background script. Because I want my extension\u2019s popup to appear on every page one is browsing, I leave the page matchers empty.\nThere are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!\nThe Extension\nHere\u2019s what my original Popup looked like before I added styles.\nAnd here\u2019s what it looks like with new styles. I guess I\u2019m going for a Nickelodeon feel.\nAnd here\u2019s the Timer window and Popup together! \nPublishing\nPublishing is a cinch. You just zip up your files, create a new or use an existing Google Developer account, upload the files, add some details, and pay a one time $5 fee. That\u2019s all! Then your extension will be available on the Chrome extension store! Neato :D\nMy extension is now available for you to install.\nConclusion\nI thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it\u2019s definitely achievable with some time, persistence, and good ole Google searches.\nEventually, I\u2019d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup!\nEither way, with Eye Rest\u2019s framework built out, I\u2019m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.", "year": "2018", "author": "Jennifer Wong", "author_slug": "jenniferwong", "published": "2018-12-05T00:00:00+00:00", "url": "https://24ways.org/2018/my-first-chrome-extension/", "topic": "code"} {"rowid": 241, "title": "Jank-Free Image Loads", "contents": "There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then \u2014\nYour browser doesn\u2019t support HTML5 video. 
Here is\n a link to the video instead.\n\n\u2014 oops! \u2014 an image pops in above it, pushing said text down the page, and our poor reader loses their place.\nBy default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I\u2019ll chart a path into a jank-free future \u2013 one in which it\u2019s easy and natural to author elements that load like this:\nYour browser doesn\u2019t support HTML5 video. Here is\n a link to the video instead.\n\nJank is a very old problem, and there is a very old solution to it: the width and height attributes on . The idea is: if we stick an image\u2019s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.\n\nwidth\nSpecifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.\n\n\u2014The HTML 3.2 Specification, published on January 14 1997\nUnfortunately for us, when width and height were first spec\u2019d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:\nSee the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.\n\nwidth and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.\nI have good news, bad news, and great news.\nThe good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.\nThe bad news is that these techniques are all terrible, cumbersome hacks. They\u2019re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.\nSo, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.\naspect-ratio in CSS\nThe first proposed feature? An aspect-ratio property in CSS!\nThis would allow us to write CSS like this:\nimg {\n width: 100%;\n}\n\n.thumb {\n aspect-ratio: 1/1;\n}\n\n.hero {\n aspect-ratio: 16/9;\n}\nThis\u2019ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.\nAlas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It\u2019s images \u2013 possibly user generated images \u2013 that can have any aspect ratio. The really tricky problem is unknown-when-you\u2019re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:\n\nAnd inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style=\"\" attribute. And you know what? The old man has a point! 
By sticking super-high-specificity inline styles in my content, I\u2019m cutting off my, (or anyone else\u2019s) ability to change those aspect ratios, for whatever reason, later.\nHow might we specify aspect ratios at a lower level? How might we give browsers information about an image\u2019s dimensions, without giving them explicit instructions about how to style it?\nI\u2019ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!\nA brief note on intrinsic and extrinsic sizing\nWhat do I mean by \u201cintrinsic\u201d and \u201cextrinsic?\u201d\nThe intrinsic size of an image is, put simply, how big it\u2019d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800\u00d7600 image has an intrinsic width of 800px.\nThe extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800\u00d7600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn\u2019t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.\nIt surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).\nCSS aspect-ratio lets us avoid setting extrinsic heights and widths \u2013 and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.\nThe last tool I\u2019m going to talk about gets us out of the extrinsic sizing game all together \u2014 which, I think, is only appropriate for a feature that we\u2019re going to be using in HTML.\nintrinsicsize in HTML\nThe proposed intrinsicsize attribute will let you do this:\n\nThat tells the browser, \u201chey, this image.jpg that I\u2019m using here \u2013 I know you haven\u2019t loaded it yet but I\u2019m just going to let you know right away that it\u2019s going to have an intrinsic size of 800\u00d7600.\u201d This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image\u2019s intrinsic size.\nYou may ask (I did!): wait, what if my references multiple resources, which all have different intrinsic sizes? Well, if you\u2019re using srcset, intrinsicsize is a bit of a misnomer \u2013 what the attribute will do then, is specify an intrinsic aspect ratio:\n\nIn the future (and behind the \u201cExperimental Web Platform Features\u201d flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 \u2013 it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.\nCan\u2019t wait\nI seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!\nDon\u2019t let all of these details bury the big takeaway here: sometime soon (\ud83e\udd1e 2019\u203d \ud83e\udd1e), we\u2019ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. 
I can\u2019t wait!", "year": "2018", "author": "Eric Portis", "author_slug": "ericportis", "published": "2018-12-21T00:00:00+00:00", "url": "https://24ways.org/2018/jank-free-image-loads/", "topic": "code"} {"rowid": 240, "title": "My CSS Wish List", "contents": "I love Christmas. I love walking around the streets of London, looking at the beautifully decorated windows, seeing the shiny lights that hang above Oxford Street and listening to Christmas songs.\n\nI\u2019m not going to lie though. Not only do I like buying presents, I love receiving them too. I remember making long lists that I would send to Father Christmas with all of the Lego sets I wanted to get. I knew I could only get one a year, but I would spend days writing the perfect list.\n\nThe years have gone by, but I still enjoy making wish lists. And I\u2019ll tell you a little secret: my mum still asks me to send her my Christmas list every year.\n\nThis time I\u2019ve made my CSS wish list. As before, I\u2019d be happy with just one present.\n\nBefore I begin\u2026\n\n\u2026 this list includes:\n\n\n\tthings that don\u2019t exist in the CSS specification (if they do, please let me know in the comments \u2013 I may have missed them);\n\tothers that are in the spec, but it\u2019s incomplete or lacks use cases and examples (which usually means that properties haven\u2019t been implemented by even the most recent browsers).\n\n\nLike with any other wish list, the further down I go, the more unrealistic my expectations \u2013 but that doesn\u2019t mean I can\u2019t wish. Some of the things we wouldn\u2019t have thought possible a few years ago have been implemented and our wishes fulfilled (think multiple backgrounds, gradients and transformations, for example).\n\nThe list\n\nCross-browser implementation of font-size-adjust\n\nWhen one of the fall-back fonts from your font stack is used, rather than the preferred (first) one, you can retain the aspect ratio by using this very useful property. It is incredibly helpful when the fall-back fonts are smaller or larger than the initial one, which can make layouts look less polished.\n\nWhat font-size-adjust does is divide the original font-size of the fall-back fonts by the font-size-adjust value. This preserves the x-height of the preferred font in the fall-back fonts. Here\u2019s a simple example:\n\np {\n font-family: Calibri, \"Lucida Sans\", Verdana, sans-serif;\n font-size-adjust: 0.47;\n}\n\nIn this case, if the user doesn\u2019t have Calibri installed, both Lucida Sans and Verdana will keep Calibri\u2019s aspect ratio, based on the font\u2019s x-height. This property is a personal favourite and one I keep pointing to.\n\nFirefox supported this property from version three. So far, it\u2019s the only browser that does. Fontdeck provides the font-size-adjust value along with its fonts, and has a handy tool for calculating it.\n\nMore control over overflowing text\n\nThe text-overflow property lets you control text that overflows its container. The most common use for it is to show an ellipsis to indicate that there is more text than what is shown. To be able to use it, the container should have overflow set to something other than visible, and white-space: nowrap:\n\ndiv {\n white-space: nowrap;\n width: 100%;\n overflow: hidden;\n text-overflow: ellipsis;\n}\n\nThis, however, only works for blocks of text on a single line. In the wish list of many CSS authors (and in mine) is a way of defining text-overflow: ellipsis on a block of multiple text lines. 
Opera has taken the first step and added support for the -o-ellipsis-lastline property, which can be used instead of ellipsis. This property is not part of the CSS3 spec, but we could certainly make good use of it if it were\u2026\n\nWebKit has -webkit-line-clamp to specify how many lines to show before cutting with an ellipsis, but support is patchy at best and there is no control over where the ellipsis shows in the text. Many people have spent time wrangling JavaScript to do this for us, but the methods used are very processor intensive, and introduce a JavaScript dependency.\n\nIndentation and hanging punctuation properties\n\nYou might notice a trend here: almost half of the items in this list relate to typography. The lack of fine-grained control over typographical detail is a general concern among designers and CSS authors. Indentation and hanging punctuation fall into this category.\n\nThe CSS3 specification introduces two new possible values for the text-indent property: each-line; and hanging. each-line would indent the first line of the block container and each line after a forced line break; hanging would invert which lines are affected by the indentation.\n\nThe proposed hanging-punctuation property would allow us to specify whether opening and closing brackets and quotes should hang outside the edge of the first and last lines. The specification is still incomplete, though, and asks for more examples and use cases.\n\nText alignment and hyphenation properties\n\nFollowing the typographic trend of this list, I\u2019d like to add better control over text alignment and hyphenation properties. The CSS3 module on Generated Content for Paged Media already specifies five new hyphenation-related properties (namely: hyphenate-dictionary; hyphenate-before and hyphenate-after; hyphenate-lines; and hyphenate-character), but it is still being developed and lacks examples.\n\nIn the text alignment realm, the new text-align-last property allows you to define how the last line of a block (or a line just before a forced break) is aligned, if your text is set to justify. Its value can be: start; end; left; right; center; and justify. The text-justify property should also allow you to have more control over text set to text-align: justify but, for now, only Internet Explorer supports this.\n\ncalc()\n\nThis is probably my favourite item in the list: the calc() function. This function is part of the CSS3 Values and Units module, but it has only been implemented by Firefox (4.0). To take advantage of it now you need to use the Mozilla vendor code, -moz-calc().\n\nImagine you have a fluid two-column layout where the sidebar column has a fixed width of 240 pixels, and the main content area fills the rest of the width available. This is how you could create that using -moz-calc():\n\n#main {\n width: -moz-calc(100% - 240px);\n}\n\nCan you imagine how many hacks and headaches we could avoid were this function available in more browsers? 
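Until that happens, one way to use it defensively is to layer declarations and let the cascade do the work - browsers drop declarations they cannot parse, so each browser gets the best width it understands. (The 75% figure below is just a rough stand-in I have picked for illustration; the unprefixed calc() is the form the specification defines.)\n\n#main {\n width: 75%; /* approximate fallback for browsers with no calc() support */\n width: -moz-calc(100% - 240px); /* Firefox 4 */\n width: calc(100% - 240px); /* the unprefixed form from the spec */\n}\n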
Transitions and animations are really nice and lovely but, for me, it\u2019s the ability to do the things that calc() allows you to that deserves the spotlight and to be pushed for implementation.\n\nSelector grouping with -moz-any()\n\nThe -moz-any() selector grouping has been introduced by Mozilla but it\u2019s not part of any CSS specification (yet?); it\u2019s currently only available on Firefox 4.\n\nThis would be especially useful with the way HTML5 outlines documents, where we can have any number of variations of several levels of headings within numerous types of containers (think sections within articles within sections\u2026).\n\nHere is a quick example (copied from the Mozilla blog post about the article) of how -moz-any() works. Instead of writing:\n\nsection section h1, section article h1, section aside h1,\nsection nav h1, article section h1, article article h1,\narticle aside h1, article nav h1, aside section h1,\naside article h1, aside aside h1, aside nav h1, nav section h1,\nnav article h1, nav aside h1, nav nav h1, {\n font-size: 24px;\n}\n\nYou could simply write:\n\n-moz-any(section, article, aside, nav)\n-moz-any(section, article, aside, nav) h1 {\n font-size: 24px;\n}\n\nNice, huh?\n\nMore control over styling form elements\n\nSome are of the opinion that form elements shouldn\u2019t be styled at all, since a user might not recognise them as such if they don\u2019t match the operating system\u2019s controls. I partially agree: I\u2019d rather put the choice in the hands of designers and expect them to be capable of deciding whether their particular design hampers or improves usability.\n\nI would say the same idea applies to font-face: while some fear designers might go crazy and litter their web pages with dozens of different fonts, most welcome the freedom to use something other than Arial or Verdana.\n\nThere will always be someone who will take this freedom too far, but it would be useful if we could, for example, style the default Opera date picker:\n\n\n\n\n\nor Safari\u2019s slider control (think star movie ratings, for example):\n\n\n\n\n\nParent selector\n\nI don\u2019t think there is one CSS author out there who has never come across a case where he or she wished there was a parent selector. There have been many suggestions as to how this could work, but a variation of the child selector is usually the most popular:\n\narticle < h1 {\n\u2026\n}\n\nOne can dream\u2026\n\nFlexible box layout\n\nThe Flexible Box Layout Module sounds a bit like magic: it introduces a new box model to CSS, allowing you to distribute and order boxes inside other boxes, and determine how the available space is shared.\n\nTwo of my favourite features of this new box model are:\n\n\n\tthe ability to redistribute boxes in a different order from the markup\n\tthe ability to create flexible layouts, where boxes shrink (or expand) to fill the available space\n\n\nLet\u2019s take a quick look at the second case. Imagine you have a three-column layout, where the first column takes up twice as much horizontal space as the other two:\n\n\n
<body>\n    <div id=\"main\"></div>\n    <div id=\"links\"></div>\n    <aside></aside>\n</body>
    \n \n\n\nWith the flexible box model, you could specify it like this:\n\nbody {\n display: box;\n box-orient: horizontal;\n}\n#main {\n box-flex: 2;\n}\n#links {\n box-flex: 1;\n}\naside {\n box-flex: 1;\n}\n\nIf you decide to add a fourth column to this layout, there is no need to recalculate units or percentages, it\u2019s as easy as that.\n\nBrowser support for this property is still in its early stages (Firefox and WebKit need their vendor prefixes), but we should start to see it being gradually introduced as more attention is drawn to it (I\u2019m looking at you\u2026). You can read a more comprehensive write-up about this property on the Mozilla developer blog.\n\nIt\u2019s easy to understand why it\u2019s harder to start playing with this module than with things like animations or other more decorative properties, which don\u2019t really break your layouts when users don\u2019t see them. But it\u2019s important that we do, even if only in very experimental projects.\n\nNested selectors\n\nAnyone who has never wished they could do something like the following in CSS, cast the first stone:\n\narticle {\n h1 { font-size: 1.2em; }\n ul { margin-bottom: 1.2em; }\n}\n\nEven though it can easily turn into a specificity nightmare and promote redundancy in your style sheets (if you abuse it), it\u2019s easy to see how nested selectors could be useful. CSS compilers such as Less or Sass let you do this already, but not everyone wants or can use these compilers in their projects.\n\nEvery wish list has an item that could easily be dropped. In my case, I would say this is one that I would ditch first \u2013 it\u2019s the least useful, and also the one that could cause more maintenance problems. But it could be nice.\n\nImplementation of the ::marker pseudo-element\n\nThe CSS Lists module introduces the ::marker pseudo-element, that allows you to create custom list item markers. When an element\u2019s display property is set to list-item, this pseudo-element is created.\n\nUsing the ::marker pseudo-element you could create something like the following:\n\nFootnote 1: Both John Locke and his father, Anthony Cooper, are\nnamed after 17th- and 18th-century English philosophers; the real\nAnthony Cooper was educated as a boy by the real John Locke.\nFootnote 2: Parts of the plane were used as percussion instruments\nand can be heard in the soundtrack.\n\nwhere the footnote marker is generated by the following CSS:\n\nli::marker {\n content: \"Footnote \" counter(notes) \":\";\n text-align: left;\n width: 12em;\n}\nli {\n counter-increment: notes;\n}\n\nYou can read more about how to use counters in CSS in my article from last year.\n\nBear in mind that the CSS Lists module is still a Working Draft and is listed as \u201cLow priority\u201d. I did say this wish list would start to grow more unrealistic closer to the end\u2026\n\nVariables\n\nThe sight of the word \u2018variables\u2019 may make some web designers shy away, but when you think of them applied to things such as repeated colours in your stylesheets, it\u2019s easy to see how having variables available in CSS could be useful.\n\nThink of a website where the main brand colour is applied to elements like the main text, headings, section backgrounds, borders, and so on. 
In a particularly large website, where the colour is repeated countless times in the CSS and where it\u2019s important to keep the colour consistent, using variables would be ideal (some big websites are already doing this by using server-side technology).\n\nAgain, Less and Sass allow you to use variables in your CSS but, again, not everyone can (or wants to) use these.\n\nIf you are using Less, you could, for instance, set the font-family value in one variable, and simply call that variable later in the code, instead of repeating the complete font stack, like so:\n\n@fontFamily: Calibri, \"Lucida Grande\", \"Lucida Sans Unicode\", Helvetica, Arial, sans-serif;\nbody {\n font-family: @fontFamily;\n}\n\nOther features of these CSS compilers might also be useful, like the ability to \u2018call\u2019 a property value from another selector (accessors):\n\nheader {\n background: #000000;\n}\nfooter {\n background: header['background'];\n}\n\nor the ability to define functions (with arguments), saving you from writing large blocks of code when you need to write something like, for example, a CSS gradient:\n\n.gradient (@start:\"\", @end:\"\") {\n background: -webkit-gradient(linear, left top, left bottom, from(@start), to(@end));\n background: -moz-linear-gradient(-90deg,@start,@end);\n}\nbutton {\n .gradient(#D0D0D0,#9F9F9F);\n}\n\nStandardised comments\n\nEach CSS author has his or her own style for commenting their style sheets. While this isn\u2019t a massive problem on smaller projects, where maybe only one person will edit the CSS, in larger scale projects, where dozens of hands touch the code, it would be nice to start seeing a more standardised way of commenting.\n\nOne attempt at creating a standard for CSS comments is CSSDOC, an adaptation of Javadoc (a documentation generator that extracts comments from Java source code into HTML). CSSDOC uses \u2018DocBlocks\u2019, a term borrowed from the phpDocumentor Project. A DocBlock is a human- and machine-readable block of data which has the following structure:\n\n/**\n * Short description\n *\n * Long description (this can have multiple lines and contain
<p>
    tags\n *\n * @tags (optional)\n */\n\nCSSDOC includes a standard for documenting bug fixes and hacks, colours, versioning and copyright information, amongst other important bits of data.\n\nI know this isn\u2019t a CSS feature request per se; rather, it\u2019s just me pointing you at something that is usually overlooked but that could contribute towards keeping style sheets easier to maintain and to hand over to new developers.\n\nFinal notes\n\nI understand that if even some of these were implemented in browsers now, it would be a long time until all vendors were up to speed. But if we don\u2019t talk about them and experiment with what\u2019s available, then it will definitely never happen.\n\nWhy haven\u2019t I mentioned better browser support for existing CSS3 properties? Because that would be the same as adding chocolate to your Christmas wish list \u2013 you don\u2019t need to ask, everyone knows you want it.\n\nThe list could go on. There are dozens of other things I would love to see integrated in CSS or further developed. These are my personal favourites: some might be less useful than others, but I\u2019ve wished for all of them at some point.\n\nPart of the research I did while writing this article was asking some friends what they would add to their lists; other than a couple of items I already had in mine, everything else was different. I\u2019m sure your list would be different too. So tell me, what\u2019s on your CSS wish list?", "year": "2010", "author": "Inayaili de Le\u00f3n Persson", "author_slug": "inayailideleon", "published": "2010-12-03T00:00:00+00:00", "url": "https://24ways.org/2010/my-css-wish-list/", "topic": "code"} {"rowid": 239, "title": "Using the WebFont Loader to Make Browsers Behave the Same", "contents": "Web fonts give us designers a whole new typographic palette with which to work. However, browsers handle the loading of web fonts in different ways, and this can lead to inconsistent user experiences.\n\nSafari, Chrome and Internet Explorer leave a blank space in place of the styled text while the web font is loading. Opera and Firefox show text with the default font which switches over when the web font has loaded, resulting in the so-called Flash of Unstyled Text (aka FOUT). Some people prefer Safari\u2019s approach as it eliminates FOUT, others think the Firefox way is more appropriate as content can be read whilst fonts download. Whatever your preference, the WebFont Loader can make all browsers behave the same way.\n\nThe WebFont Loader is a JavaScript library that gives you extra control over font loading. It was co-developed by Google and Typekit, and released as open source. The WebFont Loader works with most web font services as well as with self-hosted fonts.\n\nThe WebFont Loader tells you when the following events happen as a browser downloads web fonts (or loads them from cache):\n\n\n\twhen fonts start to download (\u2018loading\u2019)\n\twhen fonts finish loading (\u2018active\u2019)\n\tif fonts fail to load (\u2018inactive\u2019)\n\n\nIf your web page requires more than one font, the WebFont Loader will trigger events for individual fonts, and for all the fonts as a whole. This means you can find out when any single font has loaded, and when all the fonts have loaded (or failed to do so).\n\nThe WebFont Loader notifies you of these events in two ways: by applying special CSS classes when each event happens; and by firing JavaScript events. 
For our purposes, we\u2019ll be using just the CSS classes.\n\nImplementing the WebFont Loader\n\nAs stated above, the WebFont Loader works with most web font services as well as with self-hosted fonts.\n\nSelf-hosted fonts\n\nTo use the WebFont Loader when you are hosting the font files on your own server, paste the following code into your web page:\n\n\n\nReplace Font Family Name and Another Font Family with a comma-separated list of the font families you want to check against, and replace http://yourwebsite.com/styles.css with the URL of the style sheet where your @font-face rules reside.\n\nFontdeck\n\nAssuming you have added some fonts to a website project in Fontdeck, use the afore-mentioned code for self-hosted solutions and replace http://yourwebsite.com/styles.css with the URL of the tag in your Fontdeck website settings page. It will look something like http://f.fontdeck.com/s/css/xxxx/domain/nnnn.css.\n\nTypekit\n\nTypekit\u2019s JavaScript-based implementation incorporates the WebFont Loader events by default, so you won\u2019t need to include any WebFont Loader code.\n\nMaking all browsers behave like Safari\n\nTo make Firefox and Opera work in the same way as WebKit browsers (Safari, Chrome, etc.) and Internet Explorer, and thus minimise FOUT, you need to hide the text while the fonts are loading.\n\nWhile fonts are loading, the WebFont Loader adds a class of wf-loading to the element. Once the fonts have loaded, the wf-loading class is removed and replaced with a class of wf-active (or wf-inactive if all of the fonts failed to load). This means you can style elements on the page while the fonts are loading and then style them differently when the fonts have finished loading.\n\nSo, let\u2019s say the text you need to hide while fonts are loading is contained in all paragraphs and top-level headings. By writing the following style rule into your CSS, you can hide the text while the fonts are loading:\n\n.wf-loading h1, .wf-loading p {\n\tvisibility:hidden;\n}\n\nBecause the wf-loading class is removed once the the fonts have loaded, the visibility:hidden rule will stop being applied, and the text revealed. You can see this in action on this simple example page.\n\nThat works nicely across the board, but the situation is slightly more complicated. WebKit doesn\u2019t wait for all fonts to load before displaying text: it displays text elements as soon as the relevant font is loaded. \n\nTo emulate WebKit more accurately, we need to know when individual fonts have loaded, and apply styles accordingly. Fortunately, as mentioned earlier, the WebFont Loader has events for individual fonts too.\n\nWhen a specific font is loading, a class of the form wf-fontfamilyname-n4-loading is applied. Assuming headings and paragraphs are styled in different fonts, we can make our CSS more specific as follows:\n\n.wf-fontfamilyname-n4-loading h1, \n.wf-anotherfontfamily-n4-loading p {\n\tvisibility:hidden;\n}\n\nNote that the font family name is transformed to lower case, with all spaces removed. The n4 is a shorthand for the weight and style of the font family. In most circumstances you\u2019ll use n4 but refer to the WebFont Loader documentation for exceptions.\n\nYou can see it in action on this Safari example page (you\u2019ll probably need to disable your cache to see any change occur).\n\nMaking all browsers behave like Firefox\n\nTo make WebKit browsers and Internet Explorer work like Firefox and Opera, you need to explicitly show text while the fonts are loading. 
In order to make this happen, you need to specify a font family which is not a web font while the fonts load, like this:\n\n.wf-fontfamilyname-n4-loading h1 { \n font-family: 'arial narrow', sans-serif; \n}\n.wf-anotherfontfamily-n4-loading p { \n font-family: arial, sans-serif; \n}\n\nYou can see this in action on the Firefox example page (again you\u2019ll probably need to disable your cache to see any change occur).\n\nAnd there\u2019s more\n\nThat\u2019s just the start of what can be done with the WebFont Loader. More areas to explore would be tweaking font sizes to reduce the impact of reflowing text and to better cater for very narrow fonts. By using the JavaScript events much more can be achieved too, such as fading in text as the fonts load.", "year": "2010", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2010-12-02T00:00:00+00:00", "url": "https://24ways.org/2010/using-the-webfont-loader-to-make-browsers-behave-the-same/", "topic": "code"} {"rowid": 238, "title": "Everything You Wanted To Know About Gradients (And a Few Things You Didn\u2019t)", "contents": "Hello. I am here to discuss CSS3 gradients. Because, let\u2019s face it, what the web really needed was more gradients.\n\nStill, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer\u2019s vocabulary. There\u2019s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work.\n\nOf course, that whole \u2018proper application\u2019 thing is the tricky bit.\n\nBut given their place in our toolkit and their prominence online, it really is heartening to see we can create gradients directly with CSS. They\u2019re part of the draft images module, and implemented in two of the major rendering engines.\n\nStill, I\u2019ve always found CSS gradients to be one of the more confusing aspects of CSS3. So if you\u2019ll indulge me, let\u2019s take a quick look at how to create CSS gradients\u2014hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser.\n\nGradient theory 101 (I hope that\u2019s not really a thing)\n\nRight. So before we dive into the code, let\u2019s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here\u2019s a straightforward one:\n\n I spent seconds hours designing this gradient. I hope you like it.\n\nAt either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different:\n\n (Don\u2019t ever really do this. Please. I beg you.)\n\nIt\u2019s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way.\n\nNow, color stops alone do not a gradient make. Between each is a transition point, the fail-over point between the two stops. Now, the transition point doesn\u2019t need to fall exactly between stops: it can be brought closer to one stop or the other, influencing the overall shape of the gradient.\n\nA tale of two syntaxes\n\nArmed with our new vocabulary, let\u2019s look at a CSS gradient in the wild. 
Behold, the simple input button:\n\n\n\nThere\u2019s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here\u2019s the CSS that makes it happen:\n\ninput[type=submit] {\n\tbackground-color: #F47A20;\n\tbackground-image: -moz-linear-gradient(\n\t\t#FAA51A,\n\t\t#F47A20\n\t\t);\n\tbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\t\tcolor-stop(0, #FAA51A),\n\t\tcolor-stop(1, #F47A20)\n\t\t);\n}\n\nI\u2019ve borrowed David DeSandro\u2019s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that\u2019s perfectly understandable\u2014heck, it sort of turned mine. But let\u2019s step through the CSS slowly, and see if we can\u2019t make it a little less terrifying.\n\nVerbose WebKit is verbose\n\nHere\u2019s the syntax for our little gradient on WebKit:\n\nbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\tcolor-stop(0, #FAA51A),\n\tcolor-stop(1, #F47A20)\n\t);\n\nWoof. Quite a mouthful, no? Well, here\u2019s what we\u2019re looking at:\n\n\n\tWebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients.\n\tThe next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). Linear gradients are simply drawn along the path between those two points, which allows us to change the direction of our gradient simply by altering its start and end points.\n\tAfterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop\u2019s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself.\n\n\nFor a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us:\n\nbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\tfrom(#FAA51A),\n\tto(#FAA51A)\n\t);\n\nfrom(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#FAA51A) is the same as color-stop(1, #FAA51A) or color-stop(100%, #FAA51A)\u2014in both cases, we\u2019re simply declaring the first and last color stops in our gradient.\n\nTerse Gecko is terse\n\nWebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3.\n\nNaturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented.\n\nHere\u2019s how we get gradient-y in Gecko:\n\nbackground-image: -moz-linear-gradient(\n\t#FAA51A,\n\t#F47A20\n\t);\n\nWait, what? Done already? That\u2019s right. By default, -moz-linear-gradient assumes you\u2019re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that\u2019s the case, then you simply need to specify your color stops, delimited with a few commas.\n\nI know: that was almost\u2026 painless. 
But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them.\n\nWe can specify an origin point for our gradient:\n\nbackground-image: -moz-linear-gradient(50% 100%,\n\t#FAA51A,\n\t#F47A20\n\t);\n\nAs well as an angle, to give it a direction:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#F47A20\n\t);\n\nAnd we can specify multiple stops, simply by adding to our comma-delimited list:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#FCC,\n\t#F47A20\n\t);\n\nBy adding a percentage after a given color value, we can determine its position along the gradient path:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#FCC 20%,\n\t#F47A20\n\t);\n\nSo that\u2019s some of the flexibility implicit in the W3C/Mozilla-style syntax.\n\nNow, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit\u2019s more verbose approach to the, well, looseness behind the -moz syntax. \u00c0 chacun son gradient syntax.\n\nStill, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same.\n\nReusing color stops for fun and profit\n\nBut CSS gradients aren\u2019t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects.\n\nTim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it\u2019s the feature image that really catches the eye.\n\nYou see, there\u2019s nothing that says you can\u2019t reuse color stops. And Tim\u2019s exploited that perfectly.\n\nHe\u2019s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he\u2019s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo.\n\n\n\nBut then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss.\n\n\n\nAnd his final color stop ends at the same fully transparent white, completing the effect. Hot? I do believe so.\n\nRocking the radials\n\nWe\u2019ve been looking at linear gradients pretty exclusively. 
But I\u2019d be remiss if I didn\u2019t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar:\n\n\n\nAnd here\u2019s the relevant CSS:\n\nbackground: -moz-radial-gradient(50% 100%, farthest-side,\n\trgb(204, 255, 255) 1%,\n\trgb(85, 85, 85) 15%,\n\trgba(85, 85, 85, 0)\n\t);\nbackground: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15,\n\tfrom(rgb(204, 255, 255)),\n\tto(rgba(85, 85, 85, 0))\n\t);\n\nNow, the syntax builds on what we\u2019ve already learned about linear gradients, so much of it might be familiar to you, picking out color stops and transition points, as well as the two syntaxes\u2019 reliance on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, \u2026)) to shift into circular mode.\n\nMozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There\u2019s also a size constant defined (farthest-side), which determines the reach and shape of our gradient.\n\nWebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each also accepts a radius in pixels, allowing you to control the skew and breadth of the circle.\n\nAgain, this is a fairly modest little radial gradient. Time and article length (and, let\u2019s be honest, your author\u2019s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can\u2019t recommend the references at Mozilla and Apple strongly enough.\n\nLeave no browser behind\n\nBut no matter the kind of gradients you\u2019re working with, there is a large swathe of browsers that simply don\u2019t support gradients. Thankfully, it\u2019s fairly easy to declare a sensible fallback\u2014it just depends on the kind of fallback you\u2019d like. Essentially, gradient-blind browsers will disregard any properties containing references to either -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties.\n\nFor example: if you\u2019d like to fall back to a flat color, simply declare a separate background-color:\n\n.nav {\n\tbackground-color: #000;\n\tbackground-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nOr perhaps just create three separate background properties.\n\n.nav {\n\tbackground: #000;\n\tbackground: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nWe can even build on this to fall back to a non-gradient image:\n\n.nav {\n\tbackground: #000 url(\"faux-gradient-lol.png\") repeat-x;\n\tbackground: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nNo matter the approach you feel most appropriate to your design, it\u2019s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings.\n\n(If you\u2019re feeling especially masochistic, there\u2019s even a way to get simple linear gradients working in IE via Microsoft\u2019s proprietary filters. 
Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I’d recommend avoiding them.\n\nAnd don’t tell Andy Clarke I told you, or he’ll probably unload his Derringer at me. Or something.)\n\nGo forth and, um, gradientify!\n\nIt’s entirely possible your head’s spinning. Heck, mine is, but that might be the effects of the ’nog. But maybe you’re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine. \n\nWell, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they’re easily modifiable by tweaking a few CSS properties. Because, let’s face it, less time spent yelling at Photoshop is a very, very good thing.\n\nOf course, CSS-generated gradients are not without their drawbacks. The syntax can be confusing, and it’s still under development at the W3C. As we’ve seen, browser support is still very much in flux. And it’s possible that gradients themselves have some real performance drawbacks—so test thoroughly, and gradient carefully.\n\nBut still, as syntaxes converge, and support improves, I think generated gradients can be a compelling tool in our collective tool belts. The tasteful application is, of course, entirely up to you.\n\nSo have fun, and get gradientin’.", "year": "2010", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2010-12-22T00:00:00+00:00", "url": "https://24ways.org/2010/everything-you-wanted-to-know-about-gradients/", "topic": "code"} {"rowid": 237, "title": "Circles of Confusion", "contents": "Long before I worked on the web, I specialised in training photographers how to use large format 5×4″ and 10×8″ view cameras – film cameras with swing and tilt movements, bellows, and upside-down, back-to-front images viewed on dim ground glass screens. It’s been fifteen years since I clicked a shutter on a view camera, but some things have stayed with me from those years.\n\nIn photography, even the best lenses don’t focus light onto a point (infinitely small in size) but onto ‘spots’ or circles in the ‘film/image plane’. These circles of light have dimensions, despite being microscopically small. They’re known as ‘circles of confusion’.\n\nThe larger these circles of light become, the more unsharp parts of a photograph appear. On the flip side, when circles are smaller, an image looks sharper and more in focus. This is the basis for photographic depth of field, and with that comes the knowledge that no photograph can be perfectly focused, never truly sharp. Instead, photographs can only be ‘acceptably unsharp’. \n\nAcceptable unsharpness is now a concept that’s relevant to the work we make for the web, because often – unless we compromise – websites cannot look or be experienced exactly the same across browsers, devices or platforms. Accepting that fact, and learning to look upon these natural differences as creative opportunities instead of imperfections, can be tough. Deciding which aspects of a design must remain consistent and, therefore, possibly require more time, effort or compromises can be tougher. 
Circles of confusion can help us, our bosses and our customers make better, more informed decisions.\n\nAcceptable unsharpness\n\nMany clients still demand that every aspect of a design should be ‘sharp’ – that every user must see rounded boxes, gradients and shadows – without regard for the implications. I believe that this stems largely from the fact that they have previously been shown designs – and asked for sign-off – using static images.\n\nIt’s also true that in the past, organisations have invested heavily in style guides which, while maybe still useful in offline media, have a strictness that often fails to allow for the flexibility that we need to create experiences that are appropriate to a user’s browser or device capabilities.\n\nWe live in an era where web browsers and devices have wide-ranging capabilities, and websites can rarely look or be experienced exactly the same across them. Is a particular typeface vital to a user’s experience of a brand? How important are gradients or shadows? Are rounded corners really that necessary? These decisions determine how ‘sharp’ an element should be across browsers with different capabilities and, therefore, how much time, effort or extra code and images we devote to achieving consistency between them. To help our clients make those decisions, we can use circles of confusion.\n\nCircles of confusion\n\nUsing circles of confusion involves plotting aspects of a visual design into a series of concentric circles, starting at the centre with elements that demand the most consistency. From there, work outwards, placing elements in order of their priority so that they become progressively ‘softer’ and more defocused as they’re plotted into outer rings.\n\nIf layout and typography must remain consistent, place them in the centre circle as they’re aspects of a design that must remain ‘sharp’.\n\nWhen gradients are important – but not vital – to a user’s experience of a brand, plot them close to, but not in, the centre. This makes everyone aware that to achieve consistency, you’ll need to carve out extra images for browsers that don’t support CSS gradients.\n\nIf achieving rounded corners or shadows in all browsers isn’t important, place them into outer circles, allowing you to save time by not creating images or employing JavaScript workarounds.\n\nI’ve found plotting aspects of a visual design into circles of confusion is a useful technique when explaining the natural differences between browsers to clients. It sets more realistic expectations and creates an environment for more meaningful discussions about progressive and emerging technologies. Best of all, it enables everyone to make better and more informed decisions about design implementation priorities.\n\nInvolving clients makes the implications of their decisions more transparent. For me, this has sometimes meant shifting deadlines or it has allowed me to more easily justify an increase in fees. Most important of all, circles of confusion have helped the people that I work with move beyond yesterday’s one-size-fits-all thinking about visual design, towards accepting the rich diversity of today’s web.", "year": "2010", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2010-12-23T00:00:00+00:00", "url": "https://24ways.org/2010/circles-of-confusion/", "topic": "process"}