{"rowid": 314, "title": "Easy Ajax with Prototype", "contents": "There\u2019s little more impressive on the web today than a appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of an task into a pleasurable activity.\n\nBut it\u2019s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it\u2019s just so much damn effort. Well, the good news is that \u2013 ta-da \u2013 it doesn\u2019t have to be a headache. But man does it still look impressive. Here\u2019s how to amaze your friends.\n\nIntroducing prototype.js\n\nPrototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it\u2019s a JavaScript file which you link into your page that then enables you to do cool stuff.\n\nThere\u2019s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it\u2019s good to go. What a nice man that Mr Stephenson is \u2013 friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp.\n\nFirst step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree.\n\nCutting to the chase\n\nBefore I go on and set up an example of how to use this, let\u2019s just get to the crux. Here\u2019s how Prototype enables you to make a simple Ajax call and dump the results back to the page:\n\nvar url = 'myscript.php';\nvar pars = 'foo=bar';\nvar target = 'output-div';\t\nvar myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});\n\nThis snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page.\n\nKnocking up a basic example\n\nSo to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call too.\n\nSo, to that basic HTML page for the user to interact with. Here\u2019s one I found whilst out carol singing:\n\n\n\n\n \n Easy Ajax\n \n \n \n\n
<form id=\"greeting-form\" action=\"greeting.php\" method=\"get\">\n <div>\n  <label for=\"greeting-name\">Enter your name:</label>\n  <input id=\"greeting-name\" name=\"greeting-name\" type=\"text\" />\n  <input id=\"greeting-submit\" type=\"submit\" value=\"Greet me\" />\n </div>\n <div id=\"greeting\"></div>\n</form>\n\n</body>\n</html>
\n\n\n\nAs you can see, I\u2019ve linked in prototype.js, and also a file called ajax.js, which is where we\u2019ll be putting our glue. (Careful where you leave your glue, kids.)\n\nOur basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There\u2019s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. You\u2019ll also notice that the form has a submit button \u2013 this is so that it can function as a regular form when no JavaScript is available. It\u2019s important not to get carried away and forget the basics of accessibility.\n\nMeanwhile, back at the server\n\nSo we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you\u2019d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it.\n\nHere\u2019s a quick PHP script \u2013 greeting.php \u2013 that Santa brought me early.\n\nSeason's Greetings, $the_name!
</p>
\";\n?>\n\nYou\u2019ll perhaps want to do something a little more complex within your own projects. Just sayin\u2019.\n\nGluing it all together\n\nInside our ajax.js file, we need to hook this all together. We\u2019re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. He\u2019s how we attach an onload event to the window object and get it to call a function named init():\n\nEvent.observe(window, 'load', init, false);\n\nNow we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field.\n\nfunction init(){\n $('greeting-submit').style.display = 'none';\n Event.observe('greeting-name', 'keyup', greet, false);\n}\n\nAs you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this:\n\nfunction greet(){\n var url = 'greeting.php';\n var pars = 'greeting-name='+escape($F('greeting-name'));\n var target = 'greeting';\n var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});\n}\n\nThe key points to note here are that any user input needs to be escaped before putting into the parameters so that it\u2019s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call.\n\nThat\u2019s it\n\nNo, seriously. That\u2019s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-01T00:00:00+00:00", "url": "https://24ways.org/2005/easy-ajax-with-prototype/", "topic": "code"} {"rowid": 315, "title": "Edit-in-Place with Ajax", "contents": "Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn\u2019t go that far towards implementing something really practical. We dipped our toes in, but haven\u2019t learned to swim yet.\n\nSo here is swimming lesson number one. Anyone who\u2019s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page.\n\n\n\nPrototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we\u2019ll need here) and a whole bunch of manipulations to the user interface \u2013 all through simple library calls. Here\u2019s what we\u2019re building, so let\u2019s do it.\n\nGetting Started\n\nThere are two major components to this process; the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you\u2019ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here\u2019s what Santa dropped down my chimney: \n\n\n \n \n \n Edit-in-Place with Ajax\n \n \n \n \n\n
<body>\n <h1>Edit-in-place</h1>\n <p id=\"desc\">Dashing through the snow on a one horse open sleigh.</p>\n</body>\n</html>\n\nSo that\u2019s our page. The editable item is going to be the <p>
called desc. The process goes something like this:\n\n\n\tHighlight the area onMouseOver\n\tClear the highlight onMouseOut\n\tIf the user clicks, hide the area and replace with a ';\n\n var button = ' OR \n ';\n\n new Insertion.After(obj, textarea+button);\n\n Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false);\n Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false);\n\n }\n\nThe first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object.\n\nThe last thing to do before we leave the user to edit is it attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel.\n\nIn the event of a cancel, we can clean up behind ourselves like so:\n\nfunction cleanUp(obj, keepEditable){\n Element.remove(obj.id+'_editor');\n Element.show(obj);\n if (!keepEditable) showAsEditable(obj, true);\n }\n\nSaving the Changes\n\nThis is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response.\n\nfunction saveChanges(obj){\n var new_content = escape($F(obj.id+'_edit'));\n\n obj.innerHTML = \"Saving...\";\n cleanUp(obj, true);\n\n var success = function(t){editComplete(t, obj);}\n var failure = function(t){editFailed(t, obj);}\n\n var url = 'edit.php';\n var pars = 'id=' + obj.id + '&content=' + new_content;\n var myAjax = new Ajax.Request(url, {method:'post',\n postBody:pars, onSuccess:success, onFailure:failure});\n }\n\n function editComplete(t, obj){\n obj.innerHTML = t.responseText;\n showAsEditable(obj, true);\n }\n\n function editFailed(t, obj){\n obj.innerHTML = 'Sorry, the update failed.';\n cleanUp(obj);\n }\n\nAs you can see, we first grab in the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to \u201cSaving\u2026\u201d to show that an update is occurring, and make the Ajax POST.\n\nIf the Ajax fails, editFailed() sets the contents of the object to \u201cSorry, the update failed.\u201d Admittedly, that\u2019s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to \u201cSaving\u2026\u201d. No one likes a failure \u2013 especially a messy one.\n\nIf the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up.\n\nMeanwhile, back at the server\n\nThe missing piece of the puzzle is the server-side script for committing the changes to your database. Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here\u2019s what I have in PHP.\n\n\n\nNot exactly rocket science is it? I\u2019m just catching the content item from the POST and echoing it back. 
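To spell that out, here\u2019s a minimal sketch of what edit.php could look like \u2013 it assumes nothing beyond the id and content parameters that saveChanges() posts above:\n\n<?php\n\t// Catch the edited content from the POST and echo it straight back.\n\t// A real application would first update the record matching $_POST['id'].\n\t$content = isset($_POST['content']) ? $_POST['content'] : '';\n\techo $content;\n?>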
For your application to be useful, however, you\u2019ll need to know exactly which record you should be updating. I\u2019m passing in the ID of my
<p>
, which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update.\n\nYou should also check the user\u2019s credentials to make sure they have permission to edit whatever it is they\u2019re editing. Basically the same rules apply as with any script in your application.\n\nLimitations\n\nThere are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I\u2019ve already mentioned. The second is that from an idealistic standpoint, I\u2019d rather not be using innerHTML. However, the reality is that it\u2019s presently the most efficient way of making large changes to the document. If you\u2019re serving as XML, remember that you\u2019ll need to replace these with proper DOM nodes.\n\nIt\u2019s also important to note that it\u2019s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don\u2019t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all \u2013 it fails silently. It is for this reason that this shouldn\u2019t be used as a complete replacement for a traditional, universally accessible edit form. It\u2019s a great time-saver for those with the ability to use it, but it\u2019s no replacement.\n\nSee it in action\n\nI\u2019ve put together an example page using the inert PHP script above. That is to say, your edits aren\u2019t committed to a database, so the example is reset when the page is reloaded.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-23T00:00:00+00:00", "url": "https://24ways.org/2005/edit-in-place-with-ajax/", "topic": "code"} {"rowid": 318, "title": "Auto-Selecting Navigation", "contents": "In the article Centered Tabs with CSS Ethan laid out a tabbed navigation system which can be centred on the page. A frequent requirement for any tab-based navigation is to be able to visually represent the currently selected tab in some way.\n\nIf you\u2019re using a server-side language such as PHP, it\u2019s quite easy to write something like class=\"selected\" into your markup, but it can be even simpler than that.\n\nLet\u2019s take the navigation div from Ethan\u2019s article as an example.\n\n
<div id=\"navigation\">\n <ul>\n  <li><a href=\"#\">Home</a></li>\n  <li><a href=\"#\">About</a></li>\n  <li><a href=\"#\">Our Work</a></li>\n  <li><a href=\"#\">Products</a></li>\n  <li><a href=\"#\">Contact Us</a></li>\n </ul>\n</div>
\n\nAs you can see we have a standard unordered list which is then styled with CSS to look like tabs. By giving each tab a class which describes it\u2019s logical section of the site, if we were to then apply a class to the body tag of each page showing the same, we could write a clever CSS selector to highlight the correct tab on any given page. \n\nSound complicated? Well, it\u2019s not a trivial concept, but actually applying it is dead simple.\n\nModifying the markup\n\nFirst thing is to place a class name on each li in the list:\n\n
<div id=\"navigation\">\n <ul>\n  <li class=\"home\"><a href=\"#\">Home</a></li>\n  <li class=\"about\"><a href=\"#\">About</a></li>\n  <li class=\"work\"><a href=\"#\">Our Work</a></li>\n  <li class=\"products\"><a href=\"#\">Products</a></li>\n  <li class=\"contact\"><a href=\"#\">Contact Us</a></li>\n </ul>\n</div>
\n\nThen, on each page of your site, apply the a matching class to the body tag to indicate which section of the site that page is in. For example, on your About page:\n\n...\n\nWriting the CSS selector\n\nYou can now write a single CSS rule to match the selected tab on any given page. The logic is that you want to match the \u2018about\u2019 tab on the \u2018about\u2019 page and the \u2018products\u2019 tab on the \u2018products\u2019 page, so the selector looks like this:\n\nbody.home #navigation li.home,\n body.about #navigation li.about,\n body.work #navigation li.work,\n body.products #navigation li.products,\n body.contact #navigation li.contact{\n ... whatever styles you need to show the tab selected ...\n } \n\nSo all you need to do when you create a new page in your site is to apply a class to the body tag to say which section it\u2019s in. The CSS will do the rest for you \u2013 without any server-side help.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-10T00:00:00+00:00", "url": "https://24ways.org/2005/auto-selecting-navigation/", "topic": "code"} {"rowid": 336, "title": "Practical Microformats with hCard", "contents": "You\u2019ve probably heard about microformats over the last few months. You may have even read the easily digestible introduction at Digital Web Magazine, but perhaps you\u2019ve not found time to actually implement much yet. That\u2019s understandable, as it can sometimes be difficult to see exactly what you\u2019re adding by applying a microformat to a page. Sure, you\u2019re semantically enhancing the information you\u2019re marking up, and the Semantic Web is a great idea and all, but what benefit is it right now, today? \n\nWell, the answer to that question is simple: you\u2019re adding lots of information that can be and is being used on the web here and now. The big ongoing battle amongst the big web companies if one of territory over information. Everyone\u2019s grasping for as much data as possible. Some of that information many of us are cautious to give away, but a lot of is happy to be freely available. Of the data you\u2019re giving away, it makes sense to give it as much meaning as possible, thus enabling anyone from your friends and family to the giant search company down the road to make the most of it.\n\nOk, enough of the waffle, let\u2019s get working.\n\nIntroducing hCard\n\nYou may have come across hCard. It\u2019s a microformat for describing contact information (or really address book information) from within your HTML. It\u2019s based on the vCard format, which is the format the contacts/address book program on your computer uses. All the usual fields are available \u2013 name, address, town, website, email, you name it.\n\nIf you\u2019re running Firefox and Greasemonkey (or if you can, just to try this out), install this user script. What it does is look for instances of the hCard microformat in a page, and then add in a link to pass any hCards it finds to a web service which will convert it to a vCard. Take a look at the About the author box at the bottom of this article. It\u2019s a hCard, so you should be able to click the icon the user script inserts and add me to your Outlook contacts or OS X Address Book with just a click.\n\nSo microformats are useful after all. Free microformats all round!\n\nImplementing hCard\n\nThis is the really easy bit. 
All the hCard microformat is, is a bunch of predefined class names that you apply to the markup you\u2019ve probably already got around your contact information. Let\u2019s take the example of the About the author box from this article. Here\u2019s how the markup looks without hCard:\n\n
<div class=\"bio\">\n <h3>About the author</h3>\n <p>Drew McLellan is a web developer, author and no-good swindler from \n just outside London, England. At the \n <a href=\"http://webstandards.org/\">Web Standards Project</a> he works \n on press, strategy and tools. Drew keeps a \n <a href=\"http://allinthehead.com/\">personal weblog</a> covering web \n development issues and themes.</p>\n</div>
\n\nThis is a really simple example because there\u2019s only two key bits of address book information here:- my name and my website address. Let\u2019s push it a little and say that the Web Standards Project is the organisation I work for \u2013 that gives us Name, Company and URL.\n\nTo kick off an hCard, you need a containing object with a class of vcard. The div I already have with a class of bio is perfect for this \u2013 all it needs to do is contain the rest of the contact information.\n\nThe next thing to identify is my name. hCard uses a class of fn (meaning Full Name) to identify a name. As is this case there\u2019s no element surrounding my name, we can just use a span. These changes give us:\n\n
<div class=\"bio vcard\">\n <h3>About the author</h3>\n <p><span class=\"fn\">Drew McLellan</span> is a web developer...\n\nThe two remaining items are my URL and the organisation I belong to. The class names designated for those are url and org respectively. As both of those items are links in this case, I can apply the classes to those links. So here\u2019s the finished hCard.\n\n
<div class=\"bio vcard\">\n <h3>About the author</h3>\n <p><span class=\"fn\">Drew McLellan</span> is a web developer, author and \n no-good swindler from just outside London, England. \n At the <a class=\"org\" href=\"http://webstandards.org/\">Web Standards Project</a> \n he works on press, strategy and tools. Drew keeps a \n <a class=\"url\" href=\"http://allinthehead.com/\">personal weblog</a> covering web \n development issues and themes.</p>\n</div>
\n\nOK, that was easy. By just applying a few easy class names to the HTML I was already publishing, I\u2019ve implemented an hCard that right now anyone with Greasemonkey can click to add to their address book, that Google and Yahoo! and whoever else can index and work out important things like which websites are associated with my name if they so choose (and boy, will they so choose), and in the future who knows what. In terms of effort, practically nil.\n\nWhere next?\n\nSo that was a trivial example, but to be honest it doesn\u2019t really get much more complex even with the most pernickety permutations. Because hCard is based on vCard (a mature and well thought-out standard), it\u2019s all tried and tested. Here\u2019s some good next steps.\n\n\n\tPlay with the hCard Creator\n\tTake a deep breath and read the spec\n\tStart implementing hCard as you go on your own projects \u2013 it takes very little time\n\n\nhCard is just one of an ever-increasing number of microformats. If this tickled your fancy, I suggest subscribing to the microformats site in your RSS reader to keep in touch with new developments.\n\nWhat\u2019s the take-away?\n\nThe take-away is this. They may sound like just more Web 2-point-HoHoHo hype, but microformats are a well thought-out, and easy to implement way of adding greater depth to the information you publish online. They have some nice benefits right away \u2013 certainly at geek-level \u2013 but in the longer term they become much more significant. We\u2019ve been at this long enough to know that the web has a long, long memory and that what you publish today will likely be around for years. But putting the extra depth of meaning into your documents now you can help guard that they\u2019ll continue to be useful in the future, and not just a bunch of flat ASCII.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-06T00:00:00+00:00", "url": "https://24ways.org/2005/practical-microformats-with-hcard/", "topic": "code"} {"rowid": 222, "title": "Golden Spirals", "contents": "As building blocks go, the rectangle is not one to overwhelm the designer with decisions. On the face of it, you have two options: you can set the width, and the height. But despite this apparent simplicity, there are combinations of width and height that can look unbalanced. If a rectangle is too tall and slim, it might appear precarious. If it is not tall enough, it may simply look flat. But like a guitar string that\u2019s out of tune, you can tweak the proportions little by little until a rectangle feels, as Goldilocks said, just right.\n\nA golden rectangle has its height and width in the golden ratio, which is approximately 1:1.618. These proportions have long been recognised as being aesthetically harmonious. Whether through instruction or by intuition, artists have understood how to exploit these proportions over the centuries. Examples can be found in classical architecture, medieval book construction, and even in the recent #newtwitter redesign.\n\nA mathematical curiosity\n\n\n\n\n\n\n\nThe golden rectangle is unique, in that if you remove a square section from it, what is left behind is itself a golden rectangle. The removal of a square can be repeated on the rectangle that is left behind, and then repeated again, as many times as you like. This means that the golden rectangle can be treated as a building block for recursive patterns. 
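To put some quick numbers on that: start with a rectangle measuring 1 \u00d7 1.618 and slice a 1 \u00d7 1 square from it. The strip left over measures 0.618 \u00d7 1, and 1 \u00f7 0.618 \u2248 1.618 \u2013 the same proportions you started with, just turned on its side.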
In this article, we will exploit this property to build a golden spiral, using only HTML and CSS.\n\nThe markup\n\nThe HTML we\u2019ll use for this study is simply a series of nested
<div>s.\n\n<div id=\"container\">\n\t<div class=\"cycle\">\n\t\t<div>\n\t\t\t<div>\n\t\t\t\t<div>\n\t\t\t\t\t<div class=\"cycle\">\n\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t<div class=\"cycle\">\n\t\t\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t\t\t\t<div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"cycle\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div></div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t\t</div>\n\t\t\t\t\t\t</div>\n\t\t\t\t\t</div>\n\t\t\t\t</div>\n\t\t\t</div>\n\t\t</div>\n\t</div>\n</div>\n\nThe first of these has the class cycle, and so does every fourth ancestor thereafter. The spiral completes a cycle every four steps, so this class allows styles to be reused on <div>
s that appear at the same position in each cycle.\n\nGolden proportions\n\nTo create our spiral we are going to exploit the unique properties of the golden rectangle, so our first priority is to ensure that we have a golden rectangle to begin with. If we pick a length for the short edge \u2013 say, 288 pixels \u2013 we can then calculate the length of the long edge by multiplying this value by 1.618. In this case, 288\u2009\u00d7\u20091.618\u2009=\u2009466, so our starting point will be a
with these properties:\n\n#container > div {\n width: 466px;\n height: 288px;\n}\n\nThe greater than symbol is used here to single out the immediate child of the #container element, without affecting the grandchild or any of the more distant descendants.\n\nWe could go on to specify the precise pixel dimensions of every child element, but that means doing a lot of sums. It would be much easier if we just specified the dimensions for each element as a percentage of the width and height of its parent. This also has the advantage that if you change the size of the outermost container, all nested elements would be resized automatically \u2013 something that we shall exploit later.\n\n\n\n\n\n\n\nThe approximate value of 38.2% can be derived from (100\u2009\u00d7\u20091\u2009\u2212\u2009phi)\u2009\u00f7\u2009phi, where the Greek letter phi (\u03d5) stands for the golden ratio. The value of phi can be expressed as phi\u2009=\u2009(1\u2009+\u2009\u221a5\u2009)\u2009\u00f7\u20092, which is approximately 1.618. You don\u2019t have to understand the derivation to use it. Just remember that if you start with a golden rectangle, you can slice 38.2% from it to create a new golden rectangle.\n\nThis can be expressed in CSS quite simply:\n\n.cycle,\n.cycle > div > div {\n height: 38.2%;\n width: 100%;\n}\n.cycle > div,\n.cycle > div > div > div {\n width: 38.2%;\n height: 100%;\n}\n\nYou can see the result so far by visiting Demo One. With no borders or shading, there is nothing to see yet, so let\u2019s address that next.\n\nShading with transparency\n\nWe\u2019ll need to apply some shading to distinguish each segment of the spiral from its neighbours. We could start with a white background, then progress through shades of grey: #eee, #ddd, #ccc and so on, but this means hard-coding the background-color for every element. A more elegant solution would be to use the same colour for every element, but to make each one slightly transparent.\n\nThe nested
s that we are working with could be compared to layers in Photoshop. By applying a semi-transparent shade of grey, each successive layer can build on top of the darker layers beneath it. The effect accumulates, causing each successive layer to appear slightly darker than the last. In his 2009 article for 24 ways, Drew McLellan showed how to create a semi-transparent effect by working with RGBA colour. Here, we\u2019ll use the colour black with an alpha value of 0.07.\n\n#container div { background-color: rgba(0,0,0,0.07) }\n\nNote that I haven\u2019t used the immediate child selector here, which means that this rule will apply to all
elements inside the #container, no matter how deeply nested they are. You can view the result in Demo Two. As you can see, the golden rectangles alternate between landscape and portrait orientation.\n\n\n\nDemo Three).\n\n\n\nCSS3 specification indicates that a percentage can be used to set the border-radius property, but using percentages does not achieve consistent results in browsers today. Luckily, if you specify a border-radius in pixels using a value that is greater than the width and height of the element, then the resulting curve will use the shorter length side as its radius. This produces exactly the effect that we want, so we\u2019ll use an arbitrarily high value of 10,000 pixels for each border-radius:\n\n.cycle {\n border-radius: 0px;\n border-bottom-left-radius: 10000px;\n}\n.cycle > div {\n border-radius: 0px;\n border-bottom-right-radius: 10000px;\n}\n.cycle > div > div {\n border-radius: 0px;\n border-top-right-radius: 10000px;\n}\n.cycle > div > div > div {\n border-radius: 0px;\n border-top-left-radius: 10000px;\n}\n\nNote that the specification for the border-radius property is still in flux, so it is advisable to use vendor-specific prefixes. I have omitted them from the example above for the sake of clarity, but if you view source on Demo Four then you\u2019ll see that the actual styles are not quite as brief.\n\n\n\n\n\n\n\nFilling the available space\n\nWe have created an approximation of the Golden Spiral using only HTML and CSS. Neat! It\u2019s a shame that it occupies just a fraction of the available space. As a finishing touch, let\u2019s make the golden spiral expand or contract to use the full space available to it.\n\nIdeally, the outermost container should use the full available width or height that could accomodate a rectangle of golden proportions. This behaviour is available for background images using the \u201c background-size: contain; property, but I know of no way to make block level HTML elements behave in this fashion (if I\u2019m missing something, please enlighten me). Where CSS fails to deliver, JavaScript can usually provide a workaround. This snippet requires jQuery:\n\n$(document).ready(function() {\n\tvar phi = (1 + Math.sqrt(5))/2;\n\n\t$(window).resize(function() {\n\t\tvar goldenWidth = windowWidth = $(this).width(),\n\t\t\tgoldenHeight = windowHeight = $(this).height();\n\n\t\tif (windowWidth/windowHeight > phi) {\n\t\t\t// panoramic viewport \u2013 use full height\n\t\t\tgoldenWidth = windowHeight * phi;\n\t\t} else {\n\t\t\t// portrait viewport \u2013 use full width\n\t\t\tgoldenHeight = windowWidth / phi;\n\t\t};\n\n\t$(\"#container > div.cycle\")\n\t\t.width(goldenWidth)\n\t\t.height(goldenHeight);\n\n\t}).resize();\n\n});\n\nYou can view the result by visiting Demo Five.\n\n\n\n\n\n\n\nIs it just me, or can you see an elephant in there?\n\nYou can probably think of many ways to enhance this further, but for this study we\u2019ll leave it there. It has been a good excuse to play with proportions, positioning and the immediate child selector, as well as new CSS3 features such as border-radius and RGBA colours. If you are not already designing with golden proportions, then perhaps this will inspire you to begin.", "year": "2010", "author": "Drew Neil", "author_slug": "drewneil", "published": "2010-12-07T00:00:00+00:00", "url": "https://24ways.org/2010/golden-spirals/", "topic": "design"} {"rowid": 323, "title": "Introducing UDASSS!", "contents": "Okay. 
What\u2019s that mean?\n\nUnobtrusive Degradable Ajax Style Sheet Switcher!\n\nBoy are you in for treat today \u2018cause we\u2019re gonna have a whole lotta Ajaxifida Unobtrucitosity CSS swappin\u2019 Fun!\n\nOkay are you really kidding? Nope. I\u2019ve even impressed myself on this one. Unfortunately, I don\u2019t have much time to tell you the ins and outs of what I actually did to get this to work. We\u2019re talking JavaScript, CSS, PHP\u2026Ajax. But don\u2019t worry about that. I\u2019ve always believed that a good A.P.I. is an invisible A.P.I\u2026 and this I felt I achieved. The only thing you need to know is how it works and what to do.\n\nA Quick Introduction Anyway\u2026\n\nFirst of all, the idea is very simple. I wanted something just like what Paul Sowden put together in \nAlternative Style: Working With Alternate Style Sheets from Alistapart Magazine EXCEPT a few minor (not-so-minor actually) differences which I\u2019ve listed briefly below:\n\n\n\n\tAllow users to switch styles without JavaScript enabled (degradable)\n\tPreventing the F.O.U.C. before the window \u2018load\u2019 when getting preferred styles\n\tKeep the JavaScript entirely off our markup (no onclick\u2019s or onload\u2019s)\n\tMake it very very easy to implement (ok, Paul did that too)\n\n\nWhat I did to achieve this was used server-side cookies instead of JavaScript cookies. Hence, PHP. However this isn\u2019t a \u201cPHP style switcher\u201d \u2013 which is where Ajax comes in. For the extreme technical folks, no, there is no xml involved here, or even a callback response. I only say Ajax because everyone knows what \u2018it\u2019 means. With that said, it\u2019s the Ajax that sets the cookies \u2018on the fly\u2019. Got it? Awesome!\n\nWhat you need\n\nLuckily, I\u2019ve done the work for you. It\u2019s all packaged up in a nice zip file (at the end\u2026keep reading for now) \u2013 so from here on out, \njust follow these instructions\n\nAs I\u2019ve mentioned, one of the things we\u2019ll be working with is PHP. So, first things first, open up a file called index and save it with a \u2018.php\u2019 extension.\n\nNext, place the following text at the top of your document (even above your DOCTYPE)\n\nadd('css/global.css','screen,projection'); // [Global Styles]\n $styleSheet->add('css/preferred.css','screen,projection','Wog Standard'); // [Preferred Styles]\n $styleSheet->add('css/alternate.css','screen,projection','Tiny Fonts',true); // [Alternate Styles]\n $styleSheet->add('css/alternate2.css','screen,projection','Big O Fonts',true); // // [Alternate Styles]\n $styleSheet->getPreferredStyles();\n ?>\n\nThe way this works is REALLY EASY. Pay attention closely.\n\nNotice in the first line we\u2019ve included our style-switcher.php file.\n\nNext we instantiate a PHP class called AlternateStyles() which will allow us to configure our style sheets. \nSo for kicks, let\u2019s just call our object $styleSheet\n\nAs part of the AlternateStyles object, there lies a public method called add. So naturally with our $styleSheet object, we can call it to (da \u2013 da-da-da!) Add Style Sheets!\n\nHow the add() method works\n\nThe add method takes in a possible four arguments, only one is required. However, you\u2019ll want to add some\u2026 since the whole point is working with alternate style sheets.\n\n$path can simply be a uri, absolute, or relative path to your style sheet. $media adds a media attribute to your style sheets. 
$title gives a name to your style sheets (via title attribute).$alternate (which shows boolean) simply tells us that these are the alternate style sheets.\n\nadd() Tips\n\nFor all global style sheets (meaning the ones that will always be seen and will not be swapped out), simply use the add method as shown next to // [Global Styles].\n\nTo add preferred styles, do the same, but add a \u2018title\u2019.\n\nTo add the alternate styles, do the same as what we\u2019ve done to add preferred styles, but add the extra boolean and set it to true.\n\nNote following when adding style sheets\n\n\n\tMultiple global style sheets are allowed\n\tYou can only have one preferred style sheet (That\u2019s a browser rule)\n\tFeel free to add as many alternate style sheets as you like\n\n\nMoving on\n\nSimply add the following snippet to the of your web document:\n\n\n \n \n drop();\n ?>\n\nNothing much to explain here. Just use your copy & paste powers.\n\nHow to Switch Styles\n\nWhether you knew it or not, this baby already has the built in \u2018ubobtrusive\u2019 functionality that lets you switch styles by the drop of any link with a class name of \u2018altCss\u2018. Just drop them where ever you like in your document as follows:\n\nBog Standard\n Small Fonts\n Large Fonts\n\nTake special note where the file is linking to. Yep. Just linking right back to the page we\u2019re on. The only extra parameters we pass in is a variable called \u2018css\u2019 \u2013 and within that we append the names of our style sheets.\n\nAlso take very special note on the names of the style sheets have an under_score to take place of any spaces we might have.\n\nGo ahead\u2026 play around and change the style sheet on the example page. Try disabling JavaScript and refreshing your browser. Still works!\n\nCool eh?\n\nWell, I put this together in one night so it\u2019s still a work in progress and very beta. If you\u2019d like to hear more about it and its future development, be sure stop on by my site where I\u2019ll definitely be maintaining it.\n\nDownload the beta anyway\n\nWell this wouldn\u2019t be fun if there was nothing to download. So we\u2019re hooking you up so you don\u2019t go home (or logoff) unhappy\n\n Download U.D.A.S.S.S | V0.8\n\nMerry Christmas!\n\nThanks for listening and I hope U.D.A.S.S.S. has been well worth your time and will bring many years of Ajaxy Style Switchin\u2019 Fun!\n\nMany Blessings, Merry Christmas and have a great new year!", "year": "2005", "author": "Dustin Diaz", "author_slug": "dustindiaz", "published": "2005-12-18T00:00:00+00:00", "url": "https://24ways.org/2005/introducing-udasss/", "topic": "code"} {"rowid": 62, "title": "Being Customer Supportive", "contents": "Every day in customer support is an inbox, a Twitter feed, or a software forum full of new questions. Each is brimming with your customers looking for advice, reassurance, or fixes for their software problems. 
Each one is an opportunity to take a break from wrestling with your own troublesome tasks and assist someone else in solving theirs.\nSometimes the questions are straightforward and can be answered in a few minutes with a short greeting, a link to a help page, or a prewritten bit of text you use regularly: how to print a receipt, reset a password, or even, sadly, close your account.\nMore often, a support email requires you to spend some time unpacking the question, asking for more information, and writing a detailed personal response, tailored to help that particular user on this particular day.\nHere I offer a few of my own guidelines on how to make today\u2019s email the best support experience for both me and my customer. And even if you don\u2019t consider what you do to be customer support, you might still find the suggestions useful for the next time you need to communicate with a client, to solve a software problem with teammates, or even reach out and ask for help yourself.\n(All the examples appearing in this article are fictional. Any resemblance to quotes from real, software-using persons is entirely coincidental. Except for the bit about Star Wars. That happened.)\nWho\u2019s TAHT girl\nI\u2019ll be honest: I briefly tried making these recommendations into a clever mnemonic like FAST (facial drooping, arm weakness, speech difficulties, time) or PAD (pressure, antiseptic, dressing). But instead, you get TAHT: tone, ask, help, thank. Ah, well.\nAs I work through each message in my support queue, I\n\nlisten to the tone of the email\nask clarifying questions\nbring in extra help as needed\nand thank the customer when the problem is solved.\n\nLet\u2019s open an email and get started!\nLeave your message at the sound of the tone\nWith our enthusiasm for emoji, it can be very hard to infer someone\u2019s tone from plain text. How much time have you spent pondering why your friend responded with \u201cThanks.\u201d instead of \u201cThanks!\u201d? I mean, why didn\u2019t she :grin: or :wink: too?\nOur support customers, however, are often direct about how they\u2019re feeling:\n\nI\u2019m working against a deadline. Need this fixed ASAP!!!!\nThis hasn\u2019t worked in a week and I am getting really frustrated.\nI\u2019ve done this ten times before and it\u2019s always worked. I must be missing something simple.\n\nThey want us to understand the urgency of this from their point of view, just as much as we want to help them in a timely manner. How this information is conveyed gives us an instant sense of whether they are frustrated, angry, or confused\u2014and, just as importantly, how frustrated-angry-confused they are. \nListen to this tone before you start writing your reply. Here are two ways I might open an email:\n\n\u201cI\u2019m sorry that you ran into trouble with this.\u201d\n\u201cSorry you ran into trouble with this!\u201d\n\nThe content is largely the same, but the tone is markedly different. The first version is a serious, staid reaction to the problem the customer is having; the second version is more relaxed, but no less sincere.\nMatching the tone to the sender\u2019s is an important first step. Overusing exclamation points or dropping in too-casual language may further upset someone who is already having a crummy time with your product. 
But to a cheerful user, a formal reply or an impersonal form response can be off-putting, and damage a good relationship.\nWhen in doubt, I err on the side of being too formal, rather than sending a reply that may be read as flip or insincere. But whichever you choose, matching your correspondent\u2019s tone will make for a more comfortable conversation.\nCatch the ball and throw it back\nOnce you\u2019ve got that tone on lock, it\u2019s time to tackle the question at hand. Let\u2019s see what our customer needs help with today:\n\nI tried everything in the troubleshooting page but I can\u2019t get it to work again. I am on a Mac. Please help.\n\nHmm, not much information here. Now, if I got this short email after helping five other people with the same problem on Mac OS X, I would be sorely tempted to send this customer that common solution in my first reply. I\u2019ve found it\u2019s important to resist the urge to assume this sixth person needs the same answer as the other five, though: there isn\u2019t enough to connect this email to the ones that came before hers. \nInstead, ask a few questions to start. Invest some time to see if there are other symptoms in common, like so:\n\nI\u2019m sorry that you ran into trouble with this! I\u2019ll need a little more information to see what\u2019s happening here.\n[questions]\nThank you for your help.\n\nThose questions are customized for the customer\u2019s issue as much as possible, and can be fairly wide-ranging. They may include asking for log files, getting some screenshots, or simply checking the browser and operating system version she\u2019s using. I\u2019ll ask anything that might make a connection to the previous cases I\u2019ve answered\u2014or, just as importantly, confirm that there isn\u2019t a connection. What\u2019s more, a few well-placed questions may save us both from pursuing the wrong path and building additional frustration. \n(A note on that closing: \u201cThank you for your help\u201d\u2013I often end an email this way when I\u2019ve asked for a significant amount of follow up information. After all, I\u2019m imposing on my customer\u2019s time to run any number of tests. It\u2019s a necessary step, but I feel that thanking them is a nice acknowledgment we\u2019re in this together.)\nHaving said that, though, let\u2019s bring tone back into the mix:\n\nI tried everything in the troubleshooting but I can\u2019t get it to work again. I am on a Mac. I\u2019m working against a deadline. Need this fixed ASAP!!!!\n\nThis customer wants answers now. I\u2019ll still ask for more details, but would consider including the solution to the previous problem in my initial reply as well. (But only if doing so can\u2019t make the situation worse!)\n\nI\u2019m sorry that you ran into trouble with this! I\u2019ll need a little more information to see what\u2019s happening here.\n[questions]\nIf you\u2019d like to try something in the meantime, delete the file named xyz.txt. (If this isn\u2019t the cause of the problem, deleting the file won\u2019t hurt anything.) Here\u2019s how to find that file on your computer:\n[steps]\nLet me know how it goes!\n\nIn the best case, the suggestion works and the customer is on her way. If it doesn\u2019t solve the problem, you will get more information in answer to your questions and can explore other options. 
And you\u2019ve given the customer an opportunity to be involved in fixing the issue, and some new tools which might come in handy again in the future.\nBring in help\nThe support software I use counts how many emails the customer and I have exchanged, and reports it in a summary line in my inbox. It\u2019s an easy, passive reminder of how long the customer and I have been working together on a problem, especially first thing in the morning when I\u2019m reacquainting myself with my open support cases.\nThree is the smallest number I\u2019ll see there: the customer sends the initial question (1 email); I reply with an answer (2 emails); the customer confirms the problem is solved (3 emails). But the most complicated, stickiest tickets climb into double-digit replies, and anything that stretches beyond a dozen is worthy of a cheer in Slack when we finally get to the root of the problem and get it fixed.\nWhile an extra round of questions and answers will nudge that number higher, it gives me the chance to feel out the technical comfort level of the person I\u2019m helping. If I ask the customer to send some screenshots or log files and he isn\u2019t sure how to do that, I will use that information to adjust my instructions on next steps. I may still ask him to try running a traceroute on his computer, but I\u2019ll break down the steps into a concise, numbered list, and attach screenshots of each step to illustrate it.\nIf the issue at hand is getting complicated, take note if the customer starts to feel out of their depth technically\u2014either because they tell you so directly or because you sense a shift in tone. If that happens, propose bringing some outside help into the conversation:\n\nDo you have a network firewall or do you use any antivirus software? One of those might be blocking a connection that the software needs to work properly; here\u2019s a list of the required connections [link]. If you have an IT department in-house, they should be able to help confirm that none of those are being blocked.\n\nor:\n\nThis error message means you don\u2019t have permission to install the software on your own computer. Is there a systems administrator in the office that may be able to help with this? \n\nFor email-based support cases, I\u2019ll even offer to add someone from their IT department to the thread, so we can discuss the problem together rather than have the customer relay questions and answers back and forth.\nSimilarly, there are occasionally times when my way of describing things doesn\u2019t fit how the customer understands them. Rather than bang our heads against our keyboards, I will ask one of my support colleagues to join the conversation from our side, and see if he can explain things more clearly than I\u2019ve been able to do.\nWe appreciate your business. Please call again\nAnd then, o frabjous day, you get your reward: the reply which says the problem has been solved. \n\nThat worked!! Thank you so much for saving my day!\nI wish I could send you some cookies!\nIf you were here, I would give you my tickets to Star Wars.\n[Reply is an animated gif.]\n\nSometimes the reply is a bit more understated:\n\nThat fixed it. Thanks.\n\nWhether the customer is elated, satisfied, or frankly happy to be done with emailing support, I like to close longer email threads or short, complicated issues with a final thanks and reminder that we\u2019re here to help: \n\nThank you for the update; I\u2019m glad to hear that solved the problem for you! 
I hope everything goes smoothly for you now, but feel free to email us again if you run into any other questions or problems. Best,\n\nThen mark that support case closed, and move on to the next question. Because even with the most thoughtfully designed software product, there will always be customers with questions for your capable support team to answer.\nTone, ask, help, thank\nSo there you have it: TAHT. Pay attention to tone; ask questions; bring in help; thank your customer.\n(Lack of) catchy mnemonics aside, good customer support is about listening, paying attention, and taking care in your replies. I think it can be summed up beautifully by this quote from Pamela Marie (as tweeted by Chris Coyier):\n\nGolden rule asking a question: imagine trying to answer it \nGolden rule in answering: imagine getting your answer \n\nYou and your teammates are applying a variation of this golden rule in every email you write. You\u2019re the software ambassadors to your customers and clients. You get the brunt of the problems and complaints, but you also get to help fix them. You write the apologies, but you also have the chance to make each person\u2019s experience with your company or product a little bit better for next time.\nI hope that your holidays are merry and bright, and may all your support inboxes be light.", "year": "2015", "author": "Elizabeth Galle", "author_slug": "elizabethgalle", "published": "2015-12-02T00:00:00+00:00", "url": "https://24ways.org/2015/being-customer-supportive/", "topic": "process"} {"rowid": 99, "title": "A Christmas hCard From Me To You", "contents": "So apparently Christmas is coming. And what is Christmas all about? Well, cleaning out your address book, of course! What better time to go through your contacts, making sure everyone\u2019s details are up date and that you\u2019ve deleted all those nasty clients who never paid on time?\n\nIt\u2019s also a good time to make sure your current clients and colleagues have your most up-to-date details, so instead of filling up their inboxes with e-cards, why not send them something useful? Something like a\u2026 vCard! (See what I did there?)\n\nJust in case you\u2019ve been working in a magical toy factory in the upper reaches of Scandinavia for the last few years, I\u2019m going to tell you that now would also be the perfect time to get into microformats. Using the hCard format, we\u2019ll build a very simple web page and markup our contact details in such a way that they\u2019ll be understood by microformats plugins, like Operator or Tails for Firefox, or the cross-browser Microformats Bookmarklet.\n\nOh, and because Christmas is all about dressing up and being silly, we\u2019ll make the whole thing look nice and have a bit of fun with some CSS3 progressive enhancement. \n\nIf you can\u2019t wait to see what we end up with, you can preview it here.\n\n\n\nStep 1: Contact Details\n\nFirst, let\u2019s decide what details we want to put on the page. 
I\u2019d put my full name, my email address, my phone number, and my postal address, but I\u2019d rather not get surprise visits from strangers when I\u2019m fannying about with my baubles, so I\u2019m going to use Father Christmas instead (that\u2019s Santa to you Yanks).\n\nFather Christmas\nfatherchristmas@elliotjaystocks.com\n25 Laughingallthe Way\nSnow Falls\nLapland\nFinland\n010 60 58 000\n\nStep 2: hCard Creator\n\nNow I\u2019m not sure about you, but I rather like getting the magical robot pixies to do the work for me, so head on over to the hCard Creator and put those pixies to work! Pop in your details and they\u2019ll give you some nice microformatted HTML in turn.\n\n\n\n
<div id=\"hcard-Father-Christmas\" class=\"vcard\">\n\t<span class=\"fn\">Father Christmas</span>\n\t<a class=\"email\" href=\"mailto:fatherchristmas@elliotjaystocks.com\">fatherchristmas@elliotjaystocks.com</a>\n\t<div class=\"adr\">\n\t\t<div class=\"street-address\">25 Laughingallthe Way</div>\n\t\t<span class=\"locality\">Snow Falls</span>, \n\t\t<span class=\"region\">Lapland</span>, \n\t\t<span class=\"postal-code\">FI-00101</span>\n\t\t<span class=\"country-name\">Finland</span>\n\t</div>\n\t<div class=\"tel\">010 60 58 000</div>\n\t<p>This hCard created with the hCard creator.</p>\n</div>
\n\nStep 3: Editing The Code\n\nOne of the great things about microformats is that you can use pretty much whichever HTML tags you want, so just because the hCard Creator Fairies say something should be wrapped in a doesn\u2019t mean you can\u2019t change it to a . Actually, no, don\u2019t do that. That\u2019s not even excusable at Christmas.\n\nI personally have a penchant for marking up each line of an address inside a
<li> tag, where the parent <ul> retains the class of adr. As long as you keep the class names the same, you\u2019ll be fine.\n\n
<div id=\"hcard-Father-Christmas\" class=\"vcard\">\n\t<span class=\"fn\">Father Christmas</span>\n\t<a class=\"email\" href=\"mailto:fatherchristmas@elliotjaystocks.com\">fatherchristmas@elliotjaystocks.com</a>\n\t<ul class=\"adr\">\n\t\t<li class=\"street-address\">25 Laughingallthe Way</li>\n\t\t<li class=\"locality\">Snow Falls</li>\n\t\t<li class=\"region\">Lapland</li>\n\t\t<li class=\"postal-code\">FI-00101</li>\n\t\t<li class=\"country-name\">Finland</li>\n\t</ul>\n\t<span class=\"tel\">010 60 58 000</span>\n</div>
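Once one of those microformats tools gets hold of this markup (more on that in the next step), the vCard it hands over to your address book should come out looking roughly like this \u2013 a sketch of typical output rather than any one converter\u2019s exact result:\n\nBEGIN:VCARD\nVERSION:3.0\nN:Christmas;Father;;;\nFN:Father Christmas\nEMAIL;TYPE=INTERNET:fatherchristmas@elliotjaystocks.com\nTEL:010 60 58 000\nADR:;;25 Laughingallthe Way;Snow Falls;Lapland;FI-00101;Finland\nEND:VCARD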
    \n\nStep 4: Testing The Microformats\n\nWith our microformats in place, now would be a good time to test that they\u2019re working before we start making things look pretty. If you\u2019re on Firefox, you can install the Operator or Tails extensions, but if you\u2019re on another browser, just add the Microformats Bookmarklet. Regardless of your choice, the results is the same: if you\u2019ve code microformatted content on a web page, one of these bad boys should pick it up for you and allow you to export the contact info. Give it a try and you should see father Christmas appearing in your address book of choice. Now you\u2019ll never forget where to send those Christmas lists!\n\n\n\nStep 5: Some Extra Markup\n\nOne of the first things we\u2019re going to do is put a photo of Father Christmas on the hCard. We\u2019ll be using CSS to apply a background image to a div, so we\u2019ll be needing an extra div with a class name of \u201cphoto\u201d. In turn, we\u2019ll wrap the text-based elements of our hCard inside a div cunningly called \u201ctext\u201d. Unfortunately, because of the float technique we\u2019ll be using, we\u2019ll have to use one of those nasty float-clearing techniques. I shall call this \u201cchristmas-cheer\u201d, since that is what its presence will inevitably bring, of course.\n\nOh, and let\u2019s add a bit of text to give the page context, too:\n\n
<p>Send your Christmas lists my way...</p>\n\n<div class=\"vcard\">\n\t<div class=\"text\">\n\t\t<span class=\"fn\">Father Christmas</span>\n\t\t<a class=\"email\" href=\"mailto:fatherchristmas@elliotjaystocks.com\">fatherchristmas@elliotjaystocks.com</a>\n\t\t<ul class=\"adr\">\n\t\t\t<li class=\"street-address\">25 Laughingallthe Way</li>\n\t\t\t<li class=\"locality\">Snow Falls</li>\n\t\t\t<li class=\"region\">Lapland</li>\n\t\t\t<li class=\"postal-code\">FI-00101</li>\n\t\t\t<li class=\"country-name\">Finland</li>\n\t\t</ul>\n\t\t<span class=\"tel\">010 60 58 000</span>\n\t</div>\n\t<div class=\"photo\"></div>\n\t<div class=\"christmas-cheer\"></div>\n</div>\n\n<div id=\"footer\">\n\t<p>A tutorial by Elliot Jay Stocks for 24 Ways</p>\n\t<p>Background: stock.xchng | Father Christmas: iStockPhoto</p>\n</div>
    \n\nStep 6: Some Christmas Sparkle\n\nSo far, our hCard-housing web page is slightly less than inspiring, isn\u2019t it? It\u2019s time to add a bit of CSS. There\u2019s nothing particularly radical going on here; just a simple layout, some basic typographic treatment, and the placement of the Father Christmas photo. I\u2019d usually use a more thorough CSS reset like the one found in the YUI or Eric Meyer\u2019s, but for this basic page, the simple * solution will do.\n\nCheck out the step 6 demo to see our basic styles in place.\n\nFrom this\u2026\n\n\n\n\u2026 to this:\n\n\n\nStep 7: Fun With imagery\n\nNow it\u2019s time to introduce a repeating background image to the element. This will seamlessly repeat for as wide as the browser window becomes.\n\nBut that\u2019s fairly straightforward. How about having some fun with the Father Christmas image? If you look at the image file itself, you\u2019ll see that it\u2019s twice as wide as the area we can see and contains a \u2018hidden\u2019 photo of our rather camp St. Nick.\n\n\n\nAs a light-hearted visual\u2026 er\u2026 \u2018treat\u2019 for users who move their mouse over the image, we move the position of the background image on the \u201cphoto\u201d div. Check out the step 7 demo to see it working.\n\nStep 8: Progressive Enhancement\n\nFinally, this fun little project is a great opportunity for us to mess around with some advanced CSS features (some from the CSS3 spec) that we rarely get to use on client projects. (Don\u2019t forget: no Christmas pressies for clients who want you to support IE6!)\n\nHere are the rules we\u2019re using to give some browsers a superior viewing experience:\n\n\n\t@font-face allows us to use Jos Buivenga\u2019s free font \u2018Fertigo Pro\u2019 on all text;\n\ttext-shadow adds a little emphasis on the opening paragraph;\n\tbody > p:first-child causes only the first paragraph to receive this treatment;\n\tborder-radius created rounded corners on our main div and the links within it;\n\tand webkit-transition allows us to gently fade in between the default and hover states of those links.\n\n\nAnd with that, we\u2019re done! You can see the results here. It\u2019s time to customise the page to your liking, upload it to your site, and send out the URL. And do it quickly, because I\u2019m sure you\u2019ve got some last-minute Christmas shopping to finish off!", "year": "2008", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2008-12-10T00:00:00+00:00", "url": "https://24ways.org/2008/a-christmas-hcard-from-me-to-you/", "topic": "code"} {"rowid": 145, "title": "The Neverending (Background Image) Story", "contents": "Everyone likes candy for Christmas, and there\u2019s none better than eye candy. Well, that, and just more of the stuff. Today we\u2019re going to combine both of those good points and look at how to create a beautiful background image that goes on and on\u2026 forever!\n\nOf course, each background image is different, so instead of agonising over each and every pixel, I\u2019m going to concentrate on five key steps that you can apply to any of your own repeating background images. In this example, we\u2019ll look at the Miami Beach background image used on the new FOWA site, which I\u2019m afraid is about as un-festive as you can get.\n\n1. 
Choose your image wisely\n\nI find there are three main criteria when judging photos you\u2019re considering for repetition manipulation (or \u2018repetulation\u2019, as I like to say)\u2026\n\n\n\tsimplicity (beware of complex patterns)\n\tangle and perspective (watch out for shadows and obvious vanishing points)\n\tconsistent elements (for easy cloning)\n\n\nYou might want to check out this annotated version of the image, where I\u2019ve highlighted elements of the photo that led me to choose it as the right one.\n\nThe original image purchased from iStockPhoto.\n\nThe Photoshopped version used on the FOWA site.\n\n2. The power of horizontal lines\n\nWith the image chosen and your cursor poised for some Photoshop magic, the most useful thing you can do is drag out the edge pixels from one side of the image to create a kind of rough colour \u2018template\u2019 on which to work over. It doesn\u2019t matter which side you choose, although you might find it beneficial to use the one with the simplest spread of colour and complex elements.\n\nClick and hold on the marquee tool in the toolbar and select the \u2018single column marquee tool\u2019, which will span the full height of your document but will only be one pixel wide. Make the selection right at the edge of your document, press ctrl-c / cmd-c to copy the selection you made, create a new layer, and hit ctrl-v / cmd-v to paste the selection onto your new layer. using free transform (ctrl-t / cmd-t), drag out your selection so that it becomes as wide as your entire canvas. \n\nA one-pixel-wide selection stretched out to the entire width of the canvas.\n\n3. Cloning\n\nIt goes without saying that the trusty clone tool is one of the most important in the process of creating a seamlessly repeating background image, but I think it\u2019s important to be fairly loose with it. Always clone on to a new layer so that you\u2019ve got the freedom to move it around, but above all else, use the eraser tool to tweak your cloned areas: let that handle the precision stuff and you won\u2019t have to worry about getting your clones right first time.\n\nIn the example below, you can see how I overcame the problem of the far-left tree shadow being chopped off by cloning the shadow from the tree on its right. \n\nThe edge of the shadow is cut off and needs to be \u2018made\u2019 from a pre-existing element.\n\nThe successful clone completes the missing shadow.\n\nThe two elements are obviously very similar but it doesn\u2019t look like a clone because the majority of the shape is \u2018genuine\u2019 and only a small part is a duplicate. Also, after cloning I transformed the duplicate, erased parts of it, used gradients, and \u2014 ooh, did someone mention gradients?\n\n4. Never underestimate a gradient\n\nFor this image, I used gradients in a similar way to a brush: covering large parts of the canvas with a colour that faded out to a desired point, before erasing certain parts for accuracy.\n\nSeveral of the gradients and brushes that make up the \u2018customised\u2019 part of the image, visible when the main photograph layer is hidden.\n\nThe full composite.\n\nGradients are also a bit of an easy fix: you can use a gradient on one side of the image, flip it horizontally, and then use it again on the opposite side to make a more seamless join.\n\nSpeaking of which\u2026\n\n5. 
Sewing the seams\n\nNo matter what kind of magic Photoshop dust you sprinkle over your image, there will still always be the area where the two edges meet: that scary \u2018loop\u2019 point. Fret ye not, however, for there\u2019s help at hand in the form of a nice little cheat. Even though the loop point might still be apparent, we can help hide it by doing something to throw viewers off the scent.\n\nThe seam is usually easy to spot because it\u2019s a blank area with not much detail or colour variation, so in order to disguise it, go against the rule: put something across it!\n\nThis isn\u2019t quite as challenging as it may sound, because if we intentionally make our own \u2018object\u2019 to span the join, we can accurately measure the exact halfway point where we need to split it across the two sides of the image. This is exactly what I did with the FOWA background image: I made some clouds!\n\nA sky with no clouds in an unhappy one.\n\nA simple soft white brush creates a cloud-like formation in the sky.\n\nAfter taking the cloud\u2019s opacity down to 20%, I used free transform to highlight the boundaries of the layer. I then moved it over to the right, so that the middle of the layer perfectly aligned with the right side of the canvas.\n\nFinally, I duplicated the layer and did the same in reverse: dragging the layer over to the left and making sure that the middle of the duplicate layer perfectly aligned with the left side of the canvas.\n\nAnd there you have it! Boom! Ta-da! Et Voila! To see the repeating background image in action, visit futureofwebapps.com on a large widescreen monitor or see a simulation of the effect.\n\nThanks for reading, folks. Have a great Christmas!", "year": "2007", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2007-12-03T00:00:00+00:00", "url": "https://24ways.org/2007/the-neverending-background-image-story/", "topic": "code"} {"rowid": 170, "title": "A Pet Project is For Life, Not Just for Christmas", "contents": "I\u2019m excited: as December rolls on, I\u2019m winding down from client work and indulging in a big pet project I\u2019ve been dreaming up for quite some time, with the aim of releasing it early next year. I\u2019ve always been a bit of a sucker for pet projects and currently have a few in the works: the big one, two collaborations with friends, and my continuing (and completely un-web-related) attempt at music. But when I think about the other designers and developers out there whose work I admire, one thing becomes obvious: they\u2019ve all got pet projects! Look around the web and you\u2019ll see that anyone worth their salt has some sort of side project on the go. If you don\u2019t have yours yet, now\u2019s the time!\n\nHave a pet project to collaborate with your friends\n\nIt\u2019s not uncommon to find me staring at my screen, looking at beautiful websites my friends have made, grinning inanely because I feel so honoured to know such talented individuals. But one thing really frustrates me: I hardly ever get to work with these people! Sure, there are times when it\u2019s possible to do so, but due to various project situations, it\u2019s a rarity.\n\nSo, in order to work with my friends, I\u2019ve found the best way is to instigate the collaboration outside of client work; in other words, have a pet project together! Free from the hard realities of budgets, time restraints, and client demands, you and your friends can come up with something purely for your own pleasures. 
If you\u2019ve been looking for an excuse to work with other designers or developers whose work you love, the pet project is that excuse. They don\u2019t necessarily have to be friends, either: if the respect is mutual, it can be a great way of breaking the ice and getting to know someone. \n\n Figure 1: A forthcoming secret love-child from myself and Tim Van Damme\n\nHave a pet project to escape from your day job\n\nWe all like to moan about our clients and bosses, don\u2019t we? But if leaving your job or firing your evil client just isn\u2019t an option, why not escape from all that and pour your creative energies into something you genuinely enjoy? \n\nIt\u2019s not just about reacting to negativity, either: a pet project is a great way to give yourself a bit of variety. As web designers, our day-to-day work forces us to work within a set of web-related contraints and sometimes it can be demoralising to spend so many hours fixing IE bugs. The perfect antidote? Go and do some print design! If it\u2019s not possible in your day job or client work, the pet project is the perfect place to exercise your other creative muscles. Yes, print design (or your chosen alternative) has its own constraints, but if they\u2019re different to those you experience on a daily basis, it\u2019ll be a welcome relief and you\u2019ll return to your regular work feeling refreshed.\n\n Figure 2: Ligature, Loop & Stem, from Scott Boms & Luke Dorny\n\nHave a pet project to fulfill your own needs\n\nMany pet projects come into being because the designers and/or developers behind them are looking for a tool to accomplish a task and find that it doesn\u2019t exist, thus prompting them to create their own solution. In fact, the very app I\u2019m using to write this article \u2014 Ommwriter, from Herraiz Soto & Co \u2014 was originally a tool they\u2019d created for their internal staff, before releasing it to the public so that it could be enjoyed by others.\n\nJust last week, Tina Roth Eisenberg launched Teux Deux, a pet project she\u2019d designed to meet her own requirements for a to-do list, having found that no existing apps fulfilled her needs. Oh, and it was a collaboration with her studio mate Cameron. Remember what I was saying about working with your friends?\n\n Figure 3: Teux Deux, the GTD pet project that launched just last week\n\nHave a pet project to help people out\n\nOmmwriter and Teux Deux are free for anyone to use. Let\u2019s just think about that for a moment: the creators have invested their time and effort in the project, and then given it away to be used by others. That\u2019s very cool and something we\u2019re used to seeing a lot of in the web community (how lucky we are)! People love free stuff and giving away the fruits of your labour will earn you major kudos. Of course, there\u2019s nothing wrong with making some money, either \u2014 more on that in a second.\n\n Figure 4: Dan Rubin\u2018s extremely helpful Make Photoshop Faster\n\nHave a pet project to raise your profile\n\nSo, giving away free stuff earns you kudos. And kudos usually helps you raise your profile in the industry. We all like a bit of shameless fame, don\u2019t we? But seriously, if you want to become well known, make something cool. It could be free (to buy you the love and respect of the community) or it could be purchasable (if you\u2019ve made something that\u2019s cool enough to deserve hard-earned cash), but ultimately it needs to be something that people will love. 
\n\n Figure 5: Type designer Jos Buivenga has shot to fame thanks to his beautiful typefaces and \u2018freemium\u2019 business model\n\nIf you\u2019re a developer with no design skills, team up with a good designer so that the design community appreciate its aesthetic. If you\u2019re a designer with no development skills, team up with a good developer so that it works. Oh, and not that I\u2019d recommend you ever do this for selfish reasons, but collaborating with someone you admire \u2014 whose work is well-respected by the community \u2014 will also help raise your profile.\n\nHave a pet project to make money\n\nIn spite of our best hippy-esque intentions to give away free stuff to the masses, there\u2019s also nothing wrong with making a bit of money from your pet project. In fact, if your project involves you having to make a considerable financial investment, it\u2019s probably a good idea to try and recoup those costs in some way.\n\n Figure 6: The success of Shaun Inman\u2018s various pet projects \u2014 Mint, Fever, Horror Vacui, etc. \u2014 have allowed him to give up client work entirely.\n\nA very common way to do that in both the online and offline worlds is to get some sort of advertising. For a slightly different approach, try contacting a company who are relevant to your audience and ask them if they\u2019d be interesting in sponsoring your project, which would usually just mean having their brand associated with yours in some way. This is still a form of advertising but tends to allow for a more tasteful implementation, so it\u2019s worth pursuing. \n\nAdvertising is a great way to cover your own costs and keep things free for your audience, but when costs are considerably higher (like if you\u2019re producing a magazine with high production values, for instance), there\u2019s nothing wrong with charging people for your product. But, as I mentioned above, you\u2019ve got to be positive that it\u2019s worth paying for!\n\nHave a pet project just for fun\n\nSometimes there\u2019s a very good reason for having a pet project \u2014 and sometimes even a viable business reason \u2014 but actually you don\u2019t need any reason at all. Wanting to have fun is just as worthy a motivation, and if you\u2019re not going to have fun doing it, then what\u2019s the point? Assuming that almost all pet projects are designed, developed, written, printed, marketed and supported in our free time, why not do something enjoyable?\n\n Figure 7: Jessica Hische\u2018s beautiful Daily Drop Cap\n\nIn conclusion\n\nThe fact that you\u2019re reading 24 ways shows that you have a passion for the web, and that\u2019s something I\u2019m happy to see in abundance throughout our community. Passion is a term that\u2019s thrown about all over the place, but it really is evident in the work that people do. It\u2019s perhaps most evident, however, in the pet projects that people create. 
Don\u2019t forget that the very site you\u2019re reading this article on is\u2026 a pet project.\n\nIf you\u2019ve yet to do so, make it a new year\u2019s resolution for 2010 to have your own pet project so that you can collaborate with your friends, escape from your day job, fulfil your own needs, help people out, raise your profile, make money, and \u2014 above all \u2014 have fun.", "year": "2009", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2009-12-18T00:00:00+00:00", "url": "https://24ways.org/2009/a-pet-project-is-for-life-not-just-for-christmas/", "topic": "business"} {"rowid": 270, "title": "From Side Project to Not So Side Project", "contents": "In the last article I wrote for 24 ways, back in 2009, I enthused about the benefits of having a pet project, suggesting that we should all have at least one so that we could collaborate with our friends, escape our day jobs, fulfil our own needs, help others out, raise our profiles, make money, and \u2014 most importantly \u2014 have fun. I don\u2019t think I need to offer any further persuasions: it seems that designers and developers are launching their own pet projects left, right and centre. This makes me very happy.\n\nHowever, there still seems to be something of a disconnect between having a side project and turning it into something that is moderately successful; in particular, the challenge of making enough money to sustain the project and perhaps even elevating it from the sidelines so that it becomes something not so on the side at all.\n\nBefore we even begin this, let\u2019s spend a moment talking about money, also known as\u2026\n\nEvil, nasty, filthy money\n\nOver the last couple of years, I\u2019ve started referring to myself as an accidental businessman. I say accidental because my view of the typical businessman is someone who is driven by money, and I usually can\u2019t stand such people. Those who are motivated by profit, obsessed with growth, and take an active interest in the world\u2019s financial systems don\u2019t tend to be folks with whom I share a beer, unless it\u2019s to pour it over them. Especially if they\u2019re wearing pinstriped suits.\n\nThat said, we all want to make money, don\u2019t we? And most of us want to make a relatively decent amount, too. I don\u2019t think there\u2019s any harm in admitting that, is there? Hello, I\u2019m Elliot and I\u2019m a capitalist.\n\nThe key is making money from doing what we love. For most people I know in our community, we\u2019ve already achieved that \u2014 I\u2019m hard-pressed to think of anyone who isn\u2019t extremely passionate about working in our industry and I think it\u2019s one of the most positive, unifying benefits we enjoy as a group of like-minded people \u2014 but side projects usually arise from another kind of passion: a passion for something other than what we do as our day jobs. Perhaps it\u2019s because your clients are driving you mental and you need a break; perhaps it\u2019s because you want to create something that is truly your own; perhaps it\u2019s because you\u2019re sick of seeing your online work disappear so fast and you want to try your hand at print in order to make a more permanent mark.\n\nThe three factors I listed there led me to create 8 Faces, a printed magazine about typography that started as a side project and is now a very significant part of my yearly output and income.\n\nLike many things that prove fruitful, 8 Faces\u2019 success was something of an accident, too. 
For a start, the magazine was never meant to be profitable; its only purpose at all was to scratch my own itch. Then, after the first issue took off and I realized how much time I needed to spend in order to make the next one decent, it became clear that I would have to cover more than just the production costs: I\u2019d have to take time out from client work as well. Doing this meant I\u2019d have to earn some money. Probably not enough to equate to the exact amount of time lost when I could be doing client work (not that you could ever describe time as being lost when you work on something you love), but enough to survive; for me to feel that I was getting paid while doing all of the work that 8 Faces entailed. The answer was to raise money through partnerships with some cool companies who were happy to be associated with my little project.\n\nA sustainable business model\n\nBusiness model! I can\u2019t believe I just wrote those words! But a business model is really just a loose plan for how not to screw up. And all that stuff I wrote in the paragraph above about partnering with companies so I could get some money in while I put the magazine together? Well, that\u2019s my business model. \n\nIf you\u2019re making any product that has some sort of production cost, whether that\u2019s physical print run expenses or up-front dev work to get an app built, covering those costs before you even release your product means that you\u2019ll be in profit from the first copy you sell. This is no small point: production expenses are pretty much the only cost you\u2019ll ever need to recoup, so having them covered before you launch anything is pretty much the best possible position in which you could place yourself. Happy days, as Jamie Oliver would say.\n\nObtaining these initial funds through partnerships has another benefit. Sure, it\u2019s a form of advertising but, done right, your partners can potentially provide you with great content, too. In the case of 8 Faces, the ads look as nice as the rest of the magazine, and a couple of our partners also provide proper articles: genuinely meaningful, relevant, reader-pleasing articles at that. You\u2019d be amazed at how many companies are willing to become partners and, as the old adage goes, if you don\u2019t ask, you don\u2019t get.\n\nWith profit comes responsibility\n\nDon\u2019t forget about the responsibility you have to your audience if you engage in a relationship with a partner or any type of advertiser: although I may have freely admitted my capitalist leanings, I\u2019m still essentially a hairy hippy, and I feel that any partnership should be good for me as a publisher, good for the partner and \u2014 most importantly \u2014 good for the reader. Really, the key word here is relevance, and that\u2019s where 99.9% of advertising fails abysmally. \n\n(99.9% is not a scientific figure, but you know what I\u2019m on about.)\n\nThe main grey area when a side project becomes profitable is how you share that profit, partly because \u2014 in my opinion, at least \u2014 the transition from non-profitable side project to relatively successful source of income can be a little blurred. Asking for help for nothing when there\u2019s no money to be had is pretty normal, but sometimes it\u2019s easy to get used to that free help even once you start making money. I believe the best approach is to ask for help with the promise that it will always be rewarded as soon as there\u2019s money available. 
(Oh, god: this sounds like one of those nightmarish client proposals. It\u2019s not, honest.) If you\u2019re making something cool, people won\u2019t mind helping out while you find your feet.\n\nEvents often think that they\u2019re exempt from sharing profit. Perhaps that\u2019s because many event organizers think they\u2019re doing the speakers a favour rather than the other way around (that\u2019s a whole separate article), but it\u2019s shocking to see how many people seem to think they can profit from content-makers \u2014 speakers, for example \u2014 and yet not pay for that content. It was for this reason that Keir and I paid all of our speakers for our Insites: The Tour side project, which we ran back in July. We probably could\u2019ve got away without paying them, especially as the gig was so informal, but it was the right thing to do.\n\nIn conclusion: money as a by-product\n\nLet\u2019s conclude by returning to the slightly problematic nature of money, because it\u2019s the pivot on which your side project\u2019s success can swing, regardless of whether you measure success by monetary gain. I would argue that success has nothing to do with profit \u2014 it\u2019s about you being able to spend the time you want on the project. Unfortunately, that is almost always linked to money: money to pay yourself while you work on your dream idea; money to pay for more servers when your web app hits the big time; money to pay for efforts to get the word out there. The key, then, is to judge success on your own terms, and seek to generate as much money as you see fit, whether it\u2019s purely to cover your running costs, or enough to buy a small country. There\u2019s nothing wrong with profit, as long as you\u2019re ethical about it. (Pro tip: if you\u2019ve earned enough to buy a small country, you\u2019ve probably been unethical along the way.)\n\nThe point at which individuals and companies fail \u2014 in the moral sense, for sure, but often in the competitive sense, too \u2014 is when money is the primary motivation. It should never be the primary motivation. If you\u2019re not passionate enough about something to do it as an unprofitable side project, you shouldn\u2019t be doing it all. \n\nEarning money should be a by-product of doing what you love. And who doesn\u2019t want to spend their life doing what they love?", "year": "2011", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2011-12-22T00:00:00+00:00", "url": "https://24ways.org/2011/from-side-project-to-not-so-side-project/", "topic": "business"} {"rowid": 17, "title": "Bringing Design and Research Closer Together", "contents": "The \u2018should designers be able to code\u2019 debate has raged for some time, but I\u2019m interested in another debate: should designers be able to research? \n\nAre you a designer who can do research? Good research and the insights you uncover inspire fresh ways of thinking and get your creative juices flowing. Good research brings clarity to a woolly brief. Audience insight helps sharpen your focus on what\u2019s really important. Experimentation through research and design brings a sense of playfulness and curiosity to your work. Good research helps you do good design.\n\nBeing a web designer today is pretty tough, particularly if you\u2019re a freelancer and work on your own. There are so many new ideas, approaches to workflow and trends and tools to keep up with. How do you decide which things to do and which to ignore? 
A modern web designer needs to be able to consider the needs of the audience, design appropriate IAs and layouts, choose colour palettes, pick appropriate typefaces and type layouts, wrangle with content, style, code, dabble in SEO, and the list goes on and on. Not only that, but today\u2019s web designer also has to keep up with the latest talking points in the industry: responsive design, Agile, accessibility, Sass, Git, lean UX, content first, mobile first, blah blah blah. Any good web designer doesn\u2019t need to be persuaded about the merits of including research in their toolkit, but do you really have time to include research too? \n\nWho is responsible for research?\n\nGenerally, research in the web industry forms part of other disciplines and isn\u2019t so much a discipline in its own right. It\u2019s very often thought of as part of UX, or activities that make up a process such as IA or content strategy. Research is often undertaken by UX designers, information architects or content strategists and isn\u2019t something designers or developers get that involved in. Some people lump all of these activities together and label it design research and have design researchers to do it. Some companies, such as the one I run with my husband Mark, are lucky enough to have someone with specialist research knowledge (yup, that would be me folks) who can lead all or most of the research work undertaken by the company. See also Mule Design, GOV.UK, the BBC, Mailchimp, Facebook and Twitter. \n\nWhat if you\u2019re not lucky enough to have your own researcher or team of researchers? Often research is the kind of thing that\u2019s nice to have, or it can be cut from scope when doing the budget dance with a client. It often forms part of the discovery phase of a project and sometimes just becomes a tick-box exercise. But research isn\u2019t just user testing and it shouldn\u2019t just live in a report on Basecamp that no one reads. I would argue that research and experimentation is a way of working or an approach to how you design. Research can be used during the whole design process and must be a vital part of a designer\u2019s workflow on every project. Even if you work in a small studio, you can still create a culture of audience insight. Even if you work on your own, you can still absorb yourself in as much audience data as you can throughout the project life cycle. Here\u2019s how.\n\nResearch is everyone\u2019s job\n\nThere is a subtle difference between writing a research report and delivering it to a client, and them actually using it and applying the insights to their thought process. In my experience of working in the audiences team at the BBC, research was most effective when the role was embedded in the production team and insights were used as part of the editorial process.\n\nIn this section I\u2019ll talk through some common problems you might encounter in a typical project life cycle and show you ways you can use research to help you. For the sake of this article, let\u2019s imagine that we\u2019re talking about a particular project here and not ongoing product development. The same principles can of course be applied then, but even if you work in-house rather than on the agency side, you\u2019re probably used to working on distinct projects or phases of work.\n\n1. Problem: I want to come up with a new product idea. 
\n\nSolution: Inspiration through insights.\n\nBefore you begin a new project, a good way of quickly absorbing all the existing knowledge that there maybe about a theme, product type or website is to literally surround yourself with it. This is especially relevant for new ideas or product development. Create an incident room if you can: fill the walls of your meeting room, the walls near your desk, or even just use a pinboard or online pinboard if space is tight or you\u2019re working with a dispersed team. The same process can be used throughout a project\u2019s or product life cycle \u2014 read about how MailChimp has applied this idea. \n\nLet\u2019s take a new product idea as an example. Say you wanted to develop a responsive tool for web designers but you weren\u2019t sure what aspect of responsive design to focus on. First of all, you should pose a hypothesis or problem statement to gather ideas around. For example: \u201cHow to speed up a designer\u2019s responsive workflow.\u201d You would then need to gather insights around this topic. You could run some interviews with freelance designers about how they work responsively. You could shadow a development team for the day to understand their processes. You could observe conversations on Twitter or IRC or wherever your target audience interact to see what people talk about. You could search out industry data and articles currently available.\n\nThe next stage is to comb through this data and extract insights from it. You can use good old Post-it notes and a sharpie: capture one insight or thought per Post-it. If one insight leads into another, use two Post-its. The objective is volume. Try to ensure clarity in each Post-it so you don\u2019t have to go back and reference material again (maybe you could use a key if you think it\u2019ll get confusing).\n\n\n\nAfter this, stick them all up and synthesise the same way you would for any kind of cluster or affinity sort. Organise into broad themes. These themes then become springboards for further exploration and idea generation. You might see a gap or opportunity in one particular area, both from a workflow perspective but also from a business perspective. Bingo. Your insights then become the fuel for ideas generation.\n\nThis method doesn\u2019t just have to be used for new products \u2014 it works particularly well in a discovery phase for new projects or for new features in an existing product. We\u2019re doing something similar for our own responsive tool, Gridset at the moment.\n\nResources:\n\n\n\tSticky Wisdom by Dave Allan, Matt Kingdon, Kris Murrin, Daz Rudkin\n\tThe Science of Serendipity by Matt Kingdon\n\tThe Art of Innovation by Tom Kelley\n\n\n2. Problem: You\u2019re starting a new project and need to know the basics before you get headlong into designing or building. \n\nSolution: Quantitative survey.\n\nCommon questions might be:\n\n\n\tWho are the users?\n\tHow many are there?\n\tWhat are they like?\n\tWhy do they use the site?\n\tWhat do they need from the site?\n\tWhat are their goals?\n\n\nPrint out and stick up what you already know and have in your project space or \u2018incident room\u2019: any reports you have found or been given, analytics graphs, personas, pen portraits, as well as screengrabs of the current website, product or branding. Spend time looking through it all and identify the gaps. \n\nIf you have very little existing audience data, a quick and easy way to get some baseline information is to run a quick user survey on a current website. 
You can establish basic demographic information, appreciation and views of the website as it stands, as well as delve a little deeper into needs and wants. This is also vital if you want some kind of trackable measures to go back to once you have designed and built your shiny new website for your client \u2014 read more in my article for 24 ways last year.)\n\n\n\nWe use surveys a lot at Mark Boulton Design for our client work. Here\u2019s a screen grab of one we ran in March on http://info.cern.ch before we redesigned the site and did the work on the First Website Project. We repeated the survey after the new website went live and were able to compare the results. Both surveys were a great source of insight to the project team as well as for the project stakeholders who needed to pitch the idea of the hack days and fundraise for them.\n\n\n\nOnce you\u2019ve run your survey, you should always write up a short summary for yourself and your client to refer to. If you\u2019re not a trained researcher, you should try to read up on analysis techniques or data visualisation. It can be easy to misinterpret data and make it bend to the story you are trying to tell. You should be looking for the story in the data and present it without bias. \n\nIf you\u2019re using the \u2018incident room\u2019 method I mentioned earlier on, you can also extract the insights onto post it notes and add them to your growing body of knowledge.\n\nResources: \n\n\n\tUsing Questionnaires for Design Research by Emma Boulton\n\tData-driven Design with an Annual Survey by Aarron Walter\n\tResearch Methods for Product Design by Alex Milton and Paul Rodgers\n\tA Practical Guide to Designing with Data by Brian Suda\n\n\n3. Problem: You have a prototype of a new design and you need some feedback from real users. \n\nSolution: User interviews and task based testing.\n\nInterviewing is a staple research method that every designer should master as it can be used throughout a project life cycle. Erika Hall recently wrote a great article on the basics for A List Apart. From stakeholder interviews in a discovery phase, to initial user research, right through to task based testing and iteration, interviews can be enormously helpful. They are very time-consuming, however, and although speaking to someone is better than speaking to no one, it\u2019s always better to plan to do a few interviews at once, rather than one or two. I generally find that patterns only start to emerge after I\u2019ve spoken to 4 or 5 people. Interviews are another thing we do a lot of at Mark Boulton Design. Most of the interviews we do are remote due to the location of our clients and their users. \n\n\n\nRigour is an important consideration in all research activities and especially if you\u2019re a non-researcher. Interviews particularly can be easily skewed by an inexperienced facilitator, which is why pairing can be a good approach. Building rapport, questioning, time keeping, note taking and thinking on your feet can be difficult to do all at once, so having a colleague take notes while you concentrate on leading the conversation can work really well. It\u2019s important for the note taker to sit in on more than one interview so that they get a more rounded view of the feedback. The same person should also be involved in the analysis of the data. \n\n\n\nInterviews can be analysed and written up in a report or summary as with other types of research. 
I often use the same kind of collaborative process detailed earlier for deciding on themes, particularly if multiple members of the team have been involved in interviewing. \n\nInterviews are particularly useful for our incident room and can provide much colour and insight to an exploratory process. I often find verbatim quotes to be the most insightful type of data. You might find that an inexperienced researcher (or designer who is used to solving problems) will jump to interpretation too soon and forget to just listen to what the interviewee is saying. Capturing the exact form of words a person uses can help get away from this.\n\nResources: \n\n\n\tInterviewing Humans by Erika Hall\n\tA Pocket Guide to Interviewing for Research by Andrew Travers\n\tInterviewing Users by Steve Portigal\n\n\n4. Problem: How successful have I been with this new design? \n\nSolution: Key performance indicators\n\nOnce your new design has been realised, it\u2019s important to evaluate it. What works, what doesn\u2019t work so well? As well as a straightforward design crit, don\u2019t forget to introduce audience insights into a review meeting or project wash up. \n\n\n\nWork out what your KPIs \u2014 your key performance indicators \u2014 will be beforehand and then you can start to track them over time. For example, number of visits, appreciation of the site, willingness to recommend the site to a friend, number of sales, and number of conversions are all sensible measures to track. Interviews can again be helpful but cold, hard numbers are often better here. Read Corey Vilhauer\u2019s take on this on A List Apart.\n\nConsistency is key here. If you have looked at your analytics and done a survey beforehand, you will have a baseline to start from. Don\u2019t keep changing your measures and questions, or your data will not be comparable. Pick a few key questions or a set of measures, create a survey and then run it once a month, once a quarter, every six months or annually. You\u2019ll start to see changes over time as the design beds in. You may see seasonal trends and spot patterns in the data related to other activities like marketing, promotion and so on. Keeping a record of all of this will increase your understanding of your audience. We\u2019ve created a satisfaction survey for Gridset with a number of measures that we track on an ongoing basis. MailChimp has also created an annual survey with the aim of tracking their audience measures over time\n\nResources:\n\n\n\tSearch Analytics by Louis Rosenfeld\n\tA Primer on A/B Testing by Lara Swanson\n\tLean UX by Jeff Gothelf\n\n\nAnyone can do research\n\nResearch can be brought into the project life cycle at any stage. And of course, anyone can do research \u2014 you don\u2019t need to be a researcher. Some of the main skills most designers possess are also key research skills: inquisitive nature, problem solving, playfulness, empathy, and so on.\n\nWe have a small team at Mark Boulton Design. Most of the team are designers and the rest of us focus on supporting the team and clients both in terms of billable work (research, content strategy, project management) as well as the non-billable things like finance and studio management.\n\nDespite my best intentions, in the past I\u2019ve undertaken research for clients in isolation \u2014 first being briefed by the design lead, carrying out the research and then delivering the findings back, trusting the design team to take the findings on board. 
This was often due to time and availability of resources.\n\nWe\u2019ve been trying hard to join up our processes and collaborate even more across the team. Undertaking heuristic or design reviews collaboratively; taking part in frequent critiques of our work and the work of others together; pairing a researcher and a designer to run interviews; workshopping results from interviews to come up with recommendations; working closely together on questionnaire design; shadowing each other on tasks that don\u2019t fall within our core skills. A little thing like moving our desks around has also helped us have more conversations that we can all be a part of.\n\n\n\nI\u2019ve come to the conclusion that my role as the research director at Mark Boulton Design is actually a facilitator of research. As well as carrying out research, I am responsible for ensuring that research happens consistently across the team. I am responsible for empowering and training our designers so they feel confident in carrying out their own user, audience or design research for clients. So they know what to look for, when to listen, when to probe and when to take note of something. So they know how to look for themes, how to synthesise insights from research and how to apply them to their work.\n\nBetter research leads to better design\n\nSo, are you a designer who can do research? Are you a researcher who can design? The best designers are a lucky combination of researcher and designer. If you\u2019re not one of those, look at ways of enhancing the skills you lack. Because there\u2019s no doubt in my mind, that becoming a better researcher will make you a better designer.\n\nGeneral resources: \n\n\n\tSeeing the Elephant by Louis Rosenfeld\n\tConnected UX by Aarron Walter\n\tBeyond Usability Testing by Devan Goldstein\n\tJust Enough Research by Erika Hall\n\tThe User Experience Team of One by Leah Buley\n\tUndercover User Experience Design by Cennydd Bowles and James Box\n\tA Pocket Guide to Psychology for Designers by Joe Leech\n\tA Pocket Guide to International User Research by Chui Chui Tan\n\tRemote Research by Nate Bolt and Tony Tulathimutte\n\tA Pocket Guide to Experiments for Designers by Colin McFarland", "year": "2013", "author": "Emma Boulton", "author_slug": "emmaboulton", "published": "2013-12-22T00:00:00+00:00", "url": "https://24ways.org/2013/bringing-design-and-research-closer-together/", "topic": "ux"} {"rowid": 94, "title": "Using Questionnaires for Design Research", "contents": "How do you ask the right questions? \n\nIn this article, I share a bunch of tips and practical advice on how to write and use your own surveys for design research.\n\nI\u2019m an audience researcher \u2013 I\u2019m not a designer or developer. I\u2019ve spent much of the last thirteen years working with audience data both in creative agencies and on the client-side. I\u2019m also a member of the Market Research Society. I run user surveys and undertake user research for our clients at the design studio I run with my husband \u2013 Mark Boulton Design.\n\nSo let\u2019s get started!\n\nWho are you designing for?\n\nGood web designers and developers appreciate the importance of understanding the audience they are designing or building a website or app for. 
I\u2019m assuming that because you are reading a quality publication like 24 ways that you fall into this category, and so I won\u2019t begin this article with a lecture.\n\nSuffice it to say, it\u2019s a good idea to involve research of some sort during the life cycle of every project you undertake. I don\u2019t just mean visual or competitor research, which of course is also very important. I mean looking at or finding your own audience or user data. Whether that be auditing existing data or research available from the client, carrying out user interviews, A/B testing, or conducting a simple questionnaire with users, any research is better than none. If you create personas as a design tool, they should always be based on research, so you will need to have plenty of data to hand for that.\n\nWhere do I start?\n\nIn the initial kick-off stages of a project, it\u2019s a good idea to start by asking your client (when working in-house you still have a client \u2013 you might even be the client) what research or audience data they have available. Some will have loads \u2013 analytics, surveys, focus groups and insights \u2013 from talking to customers. Some won\u2019t have much at all and you\u2019ll be hard pressed to find out much about the audience. It\u2019s best to review existing research first without rushing headlong into doing new research. Get a picture of what the data tells you and perhaps get this into a document \u2013 who, what, why and how are they using this website or app? What gaps are there in existing research? What else do you need to know? Then you can decide what else you need to do to plug these gaps. Think about the information first before deciding on the methodology. The rest of my article talks mostly about running self-completion online surveys. You can of course do face-to-face surveys, self-completion written questionnaires or phone polls, but I won\u2019t cover those here. That\u2019s for another article.\n\nWhy run a survey?\n\nSurveys are great for getting a broad picture of your audience. As long as they are designed carefully, you can create an overview of them, how they use the site and their opinions of it, with an idea of which parts of this picture are more important than others. By using a limited amount of open-ended questions, you can also get some more qualitative feedback or insights on your website or app. The clients we work with surprisingly often don\u2019t have much in the way of audience research available, even basic analytics, so I will often suggest running a short survey, just to create a picture of who is out there.\n\nOK, what should I do first?\n\nBefore you rush into writing questions, stop and think about what you\u2019re trying to find out. Remember being in school when you studied science and you had to propose a hypothesis? This could be a starting point \u2013 something to prove or disprove. Or, even better, write a research brief. It doesn\u2019t have to be long; it can be just a sentence that encapsulates what you\u2019re trying to do, like a good creative brief. For the purposes of this article, I created a short, slightly silly survey on Christmas and beliefs in Father Christmas.\n\nMy research brief was:\n\n\n\tTo find out more about people\u2019s beliefs about Father Christmas and their experiences of Christmas.\n\n\nInevitably, as you start thinking of what questions to ask, you will find that you go off at tangents or your client will want you to add in everything but the kitchen sink. 
In order for your questionnaire not to get too long and lose focus, you could write lists of what it is and what it\u2019s not. This is how I\u2019d apply it to my Christmas questionnaire example:\n\nWhat it is about\n\n\n\tHow people communicate with Father Christmas\n\tIf someone\u2019s background has affected their likelihood of believing in Father Christmas\n\n\nWhat it is not about\n\n\n\tWhat colour to change Father Christmas\u2019s coat to\n\tFather Christmas\u2019s elves\n\n\nLet\u2019s get down to business: the questions. \n\nKinds of questions\n\nThere are two basic kinds of questions: open-ended and closed. Closed questions limit answers by giving the respondent a number of predefined lists of options to choose from. Typically, these are multiple-choice questions with a list of responses. You can either select one or tick all that apply. Another useful type of closed question I often use is a rating scale, where a respondent can assess a situation along a continuum of values. These can also be useful as a measure of advocacy or strength of feeling about something. There is a standard measure called the Net Promoter score, which measures how likely someone is to recommend your product or service to a friend or acquaintance. It\u2019s a useful benchmark as you can compare your scores to others in a similar sector.\n\n\n\nOpen-ended questions often take the form of a statement which requires a response. Generally, respondents are given a text box to fill in. It\u2019s useful to limit this in some way so that people have an idea of how long the expected response should be; for example, a single line for an email address (Q18), or a larger text area for a longer response (Q6).\n\nIf you plan to send your survey out to a large number of people, I would suggest using mostly closed questions, unless you want to spend a long time wading through comments and hand-coded responses. I\u2019d always advise adding a general request at the end of a survey (\u2018Is there anything else you\u2019d like to tell us?\u2019). You\u2019d be surprised how many interesting and insightful comments people will add.\n\nThere are times when it\u2019s better to provide an open-ended text box rather than a predefined list makes assumptions about your audience\u2019s groupings. For example, we ran a short survey for our Gridset beta testers and rather than assume we knew who they were, we decided to ask an open-ended question: \u201cWhat is your current job title?\u201d\n\n\n\nThe analysis took quite a bit longer than responses using a predefined list, but it meant that we were able to make sure we didn\u2019t miss anyone. And next time we run a survey for Gridset, I can use the responses gathered from this survey to help create a predefined list to make analysis easier.\n\nWhat to ask\n\nThe questions to ask depend on what you want to know, but your brief and lists of what the survey is and isn\u2019t should help here. I always ask the design team and client to give me ideas of what they are interested in finding out, and combine this with a mix of new and standard questions I have used in other surveys. I find Survey Monkey\u2019s question bank a very useful source of example questions and help with tricky wording.\n\nI always include simple demographics so I can compare my results to the population at large or internet users as a whole \u2013 just going on age, gender and location can be quite illuminating. 
For example, with the Christmas survey, I can see that the respondents were typical of the online design and dev community, mainly young and male.\n\nIf appropriate, I add questions on disability, ethnic background, religion and community of interest. Questions about ethnicity, religion, sexual preference, disability and other sensitive subjects can feel awkward and difficult to ask. This is not a good reason to not ask them. Perhaps you\u2019re working for a public sector client, like a local council, so it\u2019s likely you will need to consider groups of people who maybe under-represented, who may have differing views to others, or who you need to look at specifically as a subset.\n\nHow to ask\n\nAlthough they may seem clunky and wordy, it\u2019s often best to use the census wording or professional body wording for such demographic questions. For example, I used the UK census 2011 wording for Wales on my Christmas questionnaire in my questions on religion [PDF] (Q16) and ethnicity [PDF] (Q17). I had to adapt them slightly for the Survey Monkey format \u2013 self-completion online, rather than pen and paper \u2013 which is why \u201cWhite Welsh\u201d came up as the first option for the ethnicity question. For similar questions for US audiences, try the Census Bureau website.\n\nWhen conducting a survey for a project that has a global audience, you need to consider who your primary audience is. For example, I recently created a questionnaire for a global news website. A large proportion of its audience is based in the USA, so I was careful to word things in a way Americans would find familiar. I used the US ethnic background census question wording and options, and looked at data for US competitor news websites to decide which to include.\n\nYou should also consider people whose first language isn\u2019t English. Working as an audience researcher at BBC Wales, every survey we did was bilingual. I\u2019ve also recently run a user survey in Arabic using Google Forms. During this project, we found that while Survey Monkey supports different languages, including Arabic, the text ran left to right with no option to change it to right to left \u2013 an essential when it comes to reading Arabic! If research is a deliverable in a client project, and you know you\u2019ll need to conduct it in a foreign language, always build in extra time for translation at both the questionnaire design and analysis stages. Make sure you also allow for plenty of checks. In this case we had to change to Google Forms after initially creating our survey with Survey Monkey to get the functionality we needed.\n\nLook and feel\n\nThink about the survey as another way your audience will experience your brand. Take care getting the tone of voice right. There are plenty of great articles and books out there about tone of voice \u2013 try Letting Go of the Words by Ginny Redish for starters, or Brand Language by Liz Doig. The basic rule of thumb is to sound like a human, and use clear and friendly language. If, like me, you are lucky enough to work with journalists or copy editors, you should ask for their help, particularly in the preamble, linking text and closing statements. I find it helpful to break my questions down into sections and to have a page for each. I then have an introductory piece of text for each section to guide the respondent through the survey.\n\nYou should also make sure you check with your designers how your survey looks \u2013 use a company logo and branding, and make the typography legible. 
Many survey apps like Survey Monkey and Google Forms have a progress bar. This is helpful for users to see how far through your survey they are. I generally time the survey and give an indication in the preamble: \u201cThis survey will only take five minutes of your time.\u201d\n\nYou also need to think about how you will technically serve the questionnaire. For example, will it be via email, social media, a pop-up or lightbox on your website, or (not recommended but possible) in an ad space?\n\nEthical considerations\n\nSomething else to think about are any local laws that govern how you collect and store data, such as the Data Protection Act in the UK. As a member of the Market Research Society, I am also obliged to consider its guidelines, but even if you\u2019re not, it\u2019s always a good idea to deal with personal data ethically.\n\nIf you collect personal data that can identify individuals, you must ask their permission to share it with others, and store it securely for no longer than two years. If you want to contact people afterwards, you must ask for their permission. If you ask for email addresses, as I did in question 18, you have a ready-made sample for a further survey, interviews or focus groups. Remember, you shouldn\u2019t survey people under sixteen years old without the permission of their parents or legal guardians, so if you know your website is likely to be used by children, you must ask for verification of age early on, and your survey should close someone answers that they are under sixteen. The ESOMAR guidelines for online research [PDF] are well worth reading, as they go into detail about such issues, as well as privacy guidelines \u2013 using cookies, storing IP addresses, and so on.\n\nTools\n\nUnless you work in-house and have proprietary software, or at a market research agency and you\u2019re using specialist software such as Snap or IBM SPSS Statistics (previously just SPSS), you will need to use a good tool to run your survey, collect your responses and, ideally, help with the analysis. I like Survey Monkey because of the question bank and analysis tools. The software graphs your results and does simple cross-tabbing and filtering. What this means is you can slice the data in more interesting ways and delve a bit deeper. For example, in the Gridset questionnaire I mentioned earlier, I cross-tabbed responses to questions against whether a person worked in-house, for an agency or as a freelancer. \n\nOther well known online tools that I also use from time to time are Wufoo and Google Forms. Smart Surveys is a similar service to Survey Monkey and it\u2019s used by many leading brands in the UK. Snap Surveys mentioned above is a well-established player in the market research scene, used a lot for face-to-face surveys and also on tablets and smartphones.\n\nAnalysis\n\nAnalysis is often overlooked but is as important as the design of the questionnaire. Don\u2019t just rely on looking at the summary report and charts generated as standard by your form or survey software. Spend time with your data. Spend at least a week now and then if you can, looking at the data. Keep coming back to it and tweaking or cutting it a different way to see if there are any different pictures. Slice it up in different ways to reveal new insights. Here is the data from my dummy survey (apart from the open-ended responses). \n\nFor open-ended questions, you can analyse collaboratively. 
Print and cut out the open-ended responses and do a cluster analysis or affinity sort with a colleague. \n\n\n\nDiscussing the comments helps you to understand them. You will also find the design team are more likely to buy into the research as they have uncovered the insights for themselves. Always make sure to treat open-ended responses sensitively and don\u2019t share anything publicly in a way that identifies the respondent.\n\nWrite a report\n\nNever hand over a dataset to your client without a summary of the findings. Data on its own can be skewed to suit the reader\u2019s needs, and not everyone is able to find the story in a dataset. Even if it\u2019s not a deliverable, it\u2019s always a good idea to capture your findings in a report of some sort. Use graphs sparingly to show really interesting things or to aid the reader\u2019s understanding. I have written a quick dummy report using the data from the Christmas questionnaire so you can see how it\u2019s done.\n\nI highly recommend Brian Suda\u2019s book A Practical Guide to Designing with Data for tips on how to present data effectively, but that\u2019s a subject that benefits a whole article (indeed book) in itself. \n\nI am not a designer. I am a researcher, so I never write design recommendations in a report unless they have been talked about or suggested by the designers I work with. More often, I write up the results and we talk about them and what impact they have on the project or design. Often they lead to more questions or further research.\n\nSo that\u2019s it: a brief introduction to using questionnaires for design research. Here\u2019s a quick summary to remind you what I have talked about, and a list of resources if you\u2019re interested in reading further.\n\nTop 10 things to remember when using questionnaires for design research:\n\n\n\tStart by auditing existing research to identify gaps in data.\n\tWrite a research brief. Work out exactly what you\u2019re trying to find out \u2013 what is the survey about, and what is it not about?\n\tThe two basic kinds of questions are open-ended and closed.\n\tClosed questions limit responses by giving the respondent a number of predefined lists of options to choose from (multiple choice, rating scales, and so on).\n\tOpen-ended questions are often in the form of a statement which requires a response. Always ask one at the end of a questionnaire.\n\tAlways include simple demographics to enable you to compare your sample against the population in general.\n\tIt\u2019s best to use official census or professional body wording for questions on ethnicity, disability and religion.\n\tBe sure to think carefully about your tone of voice and the look of your questionnaire.\n\tPay attention to guidelines and laws on storing personal data, cookies and privacy.\n\tInvest plenty of time in analysis and report writing. 
Don\u2019t just look at the obvious \u2013 dig deep for more interesting insights.\n\n\nSome useful resources for further study\n\nOnline research\n\n\n\tDesign Research: Methods and Perspectives edited by Brenda Laurel\n\tOnline Research Essentials by Brenda Russell and John Purcell\n\tHandbook of Online and Social Media Research by Ray Poynter\n\tESOMAR guidelines for online research [PDF]\n\tOnline questionnaires\n\n\nMarket research books on questionnaire design\n\n\n\tUsing Questionnaires in Small-Scale Research: A Beginner\u2019s Guide by Pamela Munn\n\tQuestionnaire Design by A N Oppenheim\n\tDeveloping a Questionnaire by Bill Gillham", "year": "2012", "author": "Emma Boulton", "author_slug": "emmaboulton", "published": "2012-12-14T00:00:00+00:00", "url": "https://24ways.org/2012/using-questionnaires-for-design-research/", "topic": "business"} {"rowid": 15, "title": "Git for Grown-ups", "contents": "You are a clever and talented person. You create beautiful designs, or perhaps you have architected a system that even my cat could use. Your peers adore you. Your clients love you. But, until now, you haven\u2019t *&^#^! been able to make Git work. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^! command to upload your work.\n\nIt\u2019s not you. It\u2019s Git. Promise.\n\nYes, this is an article about the popular version control system, Git. But unlike just about every other article written about Git, I\u2019m not going to give you the top five commands that you need to memorize; and I\u2019m not going to tell you all your problems would be solved if only you were using this GUI wrapper or that particular workflow. You see, I\u2019ve come to a grand realization: when we teach Git, we\u2019re doing it wrong.\n\nLet me back up for a second and tell you a little bit about the field of adult education. (Bear with me, it gets good and will leave you feeling both empowered and righteous.) Andragogy, unlike pedagogy, is a learner-driven educational experience. There are six main tenets to adult education: \n\n\n\tAdults prefer to know why they are learning something.\n\tThe foundation of the learning activities should include experience.\n\tAdults prefer to be able to plan and evaluate their own instruction.\n\tAdults are more interested in learning things which directly impact their daily activities.\n\tAdults prefer learning to be oriented not towards content, but towards problems.\n\tAdults relate more to their own motivators than to external ones.\n\n\nNowhere in this list does it include \u201cmemorize the five most popular Git commands\u201d. And yet this is how we teach version control: init, add, commit, branch, push. You\u2019re an expert! Sound familiar? In the hierarchy of learning, memorizing commands is the lowest, or most basic, form of learning. At the peak of learning you are able to not just analyze and evaluate a problem space, but create your own understanding in relation to your existing body of knowledge.\n\n\u201cFine,\u201d I can hear you saying to yourself. \u201cBut I\u2019m here to learn about version control.\u201d Right you are! So how can we use this knowledge to master Git? First of all: I give you permission to use Git as a tool. A tool which you control and which you assign tasks to. A tool like a hammer, or a saw. Yes, your mastery of your tools will shape the kinds of interactions you have with your work, and your peers. But it\u2019s yours to control. Git was written by kernel developers for kernel development. 
The web world has adopted Git, but it is not a tool designed for us and by us. It\u2019s no Sass, y\u2019know? Git wasn\u2019t developed out of our frustration with managing CSS files in an increasingly complex ecosystem of components and atomic design. So, as you work through the next part of this article, give yourself a bit of a break. We\u2019re in this together, and it\u2019s going to be OK.\n\nWe\u2019re going to do a little activity. We\u2019re going to create your perfect Git cheatsheet.\n\nI want you to start by writing down a list of all the people on your code team. This list may include:\n\n\n\tdevelopers\n\tdesigners\n\tproject managers\n\tclients\n\n\nNext, I want you to write down a list of all the ways you interact with your team. Maybe you\u2019re a solo developer and you do all the tasks. Maybe you only do a few things. But I want you to write down a list of all the tasks you\u2019re actually responsible for. For example, my list looks like this:\n\n\n\twriting code\n\treviewing code\n\tpublishing tested code to your server(s)\n\ttroubleshooting broken code\n\n\nThe next list will end up being a series of boxes in a diagram. But to start, I want you to write down a list of your tools and constraints. This list potentially has a lot of noun-like items and verb-like items:\n\n\n\tcode hosting system (Bitbucket? GitHub? Unfuddle? self-hosted?)\n\tserver ecosystem (dev/staging/live)\n\tautomated testing systems or review gates\n\tautomated build systems (that Jenkins dude people keep referring to)\n\n\nBrilliant! Now you\u2019ve got your actors and your actions, it\u2019s time to shuffle them into a diagram. There are many popular workflow patterns. None are inherently right or wrong; rather, some are more or less appropriate for what you are trying to accomplish.\n\nCentralized workflow\n\nEveryone saves to a single place. This workflow may mean no version control, or a very rudimentary version control system which only ever has a single copy of the work available to the team at any point in time.\n\n \n\nBranching workflow\n\nEveryone works from a copy of the same place, merging their changes into the main copy as their work is completed. Think of the branches as a motorcycle sidecar: they\u2019re along for the ride and probably cannot exist in isolation of the main project for long without serious danger coming to either the driver or sidecar passenger. Branches are a fundamental concept in version control \u2014 they allow you to work on new features, bug fixes, and experimental changes within a single repository, but without forcing the changes onto others working from the same branch.\n\n \n\nForking workflow\n\nEveryone works from their own, independent repository. A fork is an exact duplicate of a repository that a developer can make their own changes to. It can be kept up to date with additional changes made in other repositories, but it cannot force its changes onto another\u2019s repository. A fork is a complete repository which can use its own workflow strategies. If developers wish to merge their work with the main project, they must make a request of some kind (submit a patch, or a pull request) which the project collaborators may choose to adopt or reject. This workflow is popular for open source projects as it enforces a review process.\n\n \n\nGitflow workflow\n\nA specific workflow convention which includes five streams of parallel coding efforts: master, development, feature branches, release branches, and hot fixes. 
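To make that concrete, a repository following this convention might contain branches named something like this (a sketch; the exact names and prefixes vary from team to team):\n\nmaster\ndevelopment\nfeature/gift-wrapping\nrelease/1.2.0\nhotfix/1.2.1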
This workflow is often simplified down to a few elements by web teams, but may be used wholesale by software product teams. The original article describing this workflow was written by Vincent Driessen back in January 2010.\n\n \n\nBut these workflows aren\u2019t about you yet, are they? So let\u2019s make the connections.\n\nFrom the list of people on your team you identified earlier, draw a little circle. Give each of these circles some eyes and a smile. Now I want you to draw arrows between each of these people in the direction that code (ideally) flows. Does your designer create responsive prototypes which are pushed to the developer? Draw an arrow to represent this.\n\nChances are high that you don\u2019t just have people on your team, but you also have some kind of infrastructure. Hopefully you wrote about it earlier. For each of the servers and code repositories in your infrastructure, draw a square. Now, add to your diagram the relationships between the people and each of the machines in the infrastructure. Who can deploy code to the live server? How does it really get there? I bet it goes through some kind of code hosting system, such as GitHub. Draw in those arrows.\n\nBut wait!\n\nThe code that\u2019s on your development machine isn\u2019t the same as the live code. This is where we introduce the concept of a branch in version control. In Git, a repository contains all of the code (sort of). A branch is a fragment of the code that has been worked on in isolation to the other branches within a repository. Often branches will have elements in common. When we compare two (or more) branches, we are asking about the difference (or diff) between these two slivers. Often the master branch is used on production, and the development branch is used on our dev server. The difference between these two branches is the untested code that is not yet deployed.\n\nOn your diagram, see if you can colour-code according to the branch names at each of the locations within your infrastructure. You might find it useful to make a few different copies of the diagram to isolate each of the tasks you need to perform. For example: our team has a peer review process that each branch must go through before it is merged into the shared development branch.\n\nFinally, we are ready to add the Git commands necessary to make sense of the arrows in our diagram. If we are bringing code to our own workstation we will issue one of the following commands: clone (the first time we bring code to our workstation) or pull. Remembering that a repository contains all branches, we will issue the command checkout to switch from one branch to another within our own workstation. If we want to share a particular branch with one of our team mates, we will push this branch back to the place we retrieved it from (the origin). Along each of the arrows in your diagram, write the name of the command you are going to use when you perform that particular task.\n\n \n\nFrom here, it\u2019s up to you to be selfish. Before asking Git what command it would like you to use, sketch the diagram of what you want. Git is your tool, you are not Git\u2019s tool. Draw the diagram. Communicate your tasks with your team as explicitly as you can. 
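A first pass at the cheatsheet itself might look something like this sketch \u2013 the commands are the standard ones described above, but the mapping of task to command is yours to define, and your own diagram may call for different ones:\n\nbring a project onto my workstation for the first time: git clone\nupdate my local copy with the latest shared work: git pull\nswitch to the branch I need to work on: git checkout branch-name\nshare my branch with the team: git push origin branch-name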
Insist on being a selfish adult learner \u2014 demand that others explain to you, in ways that are relevant to you, how to do the things you need to do today.", "year": "2013", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2013-12-04T00:00:00+00:00", "url": "https://24ways.org/2013/git-for-grownups/", "topic": "code"} {"rowid": 31, "title": "Dealing with Emergencies in Git", "contents": "The stockings were hung by the chimney with care,\nIn hopes that version control soon would be there.\n\nThis summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I\u2019ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information.\n\nLet\u2019s start by painting an updated version of Clement Clarke Moore\u2019s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen.\n\nPerhaps a few of these images are familiar, or maybe they\u2019re just settings you\u2019ve seen in a movie. It doesn\u2019t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: \u2018What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?\u2019 It\u2019s okay if the map isn\u2019t perfect. As you refine your understanding of a new topic, you\u2019ll outgrow the initial metaphors, analogies, and comparisons.\n\nWith apologies to Mr. Moore, let\u2019s give it a try.\n\nGetting Interrupted in Git\n\nWhen on the roof there arose such a clatter!\n\nYou\u2019re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you\u2019ve been working on is going to need to wait while you investigate the commotion.\n\nIf you\u2019ve got even a little bit of experience working with Git, you know that you cannot simply change what you\u2019re working on in times of emergency. If you\u2019ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state.\n\nUp to this point, you\u2019ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of \u2018switching branches for a sec\u2019. This isn\u2019t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren\u2019t finished with what you were working on.\n\nYou don\u2019t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. 
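Two companion commands are worth keeping in your back pocket here (both standard Git):\n\n$ git stash list\n$ git stash pop\n\nThe first shows everything you have set aside; the second restores the most recent stash to your working directory.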
In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren\u2019t putting your book away though, you\u2019re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down.\n\nLet\u2019s say you\u2019ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof:\n\n$ git stash\n\nAfter running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work.\n\nWith the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it\u2019s not reindeer after all, but just your boss who thought they\u2019d help out by writing some code on the project you\u2019ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time.\n\nYou fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy:\n\n$ git fetch\n$ git branch -r\n$ git checkout -b helpful-boss-branch origin/helpful-boss-branch\n\nYou are now in a local copy of the branch where you are free to look around, and figure out exactly what\u2019s going on.\n\nYou sigh audibly and say, \u2018Okay. Tell me what was happening when you first realised you\u2019d gotten into a mess\u2019 as you look through the log messages for the branch.\n\n$ git log --oneline\n$ git log\n\nBy using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof.\n\nYou may also want to compare the work your boss has done to the main branch for your project. For this article, we\u2019ll assume the main branch is named master.\n\n$ git diff master\n\nLooking through the commits, you may be able to see that things started out okay but then took a turn for the worse.\n\nChecking out a single commit\n\nUsing commands you\u2019re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch.\n\nUsing the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let\u2019s say the unique identifier you want to checkout is 25f6d7f.\n\n$ git checkout 25f6d7f\n\nNote: checking out '25f6d7f'.\n\nYou are in 'detached HEAD' state. You can look around,\nmake experimental changes and commit them, and you can\ndiscard any commits you make in this state without\nimpacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:\n\n$ git checkout -b new_branch_name\n\nHEAD is now at 25f6d7f... Removed first paragraph.\n\nThis is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic.\n\nTake a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you\u2019ve temporarily disconnected from a known chain of events. 
In other words, you\u2019re currently looking at the middle of a story (or branch) about what happened \u2013 and you\u2019re not at the endpoint for this particular story.\n\nGit allows you to view the history of your repository as a timeline (technically it\u2019s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. If you make commits while you\u2019re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work.\n\n$ git checkout master\n\nWarning: you are leaving 1 commit behind, not connected to\nany of your branches:\n\n 7a85788 Your witty holiday commit message.\n\nIf you want to keep them by creating a new branch, this may be a good time to do so with:\n\n$ git branch new_branch_name 7a85788\n\nSwitched to branch 'master'\nYour branch is up-to-date with 'origin/master'.\n\nSo, if you want to save the commits you\u2019ve made while in a detached HEAD state, you simply need to put them on a new branch.\n\n$ git branch saved-headless-commits 7a85788\n\nWith this trick under your belt, you can jingle around in history as much as you\u2019d like. It\u2019s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you\u2019ll need to move back to the branch tip by checking out the branch again.\n\n$ git checkout helpful-boss-branch\n\nYou\u2019re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you\u2019ve followed the instructions Git gave you. That wasn\u2019t so scary after all, now, was it?\n\nBack to our reindeer problem.\n\nIf your boss is anything like the bosses I\u2019ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you\u2019ll want to capture the good work using one of several different methods.\n\nBack in the living room, we\u2019ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work.\n\nSaving just one commit\n\nAbout that bowl of nuts. If you\u2019re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we\u2019re just going to pick out one nut from the bowl. In Git terms, we\u2019re going to cherry-pick a commit and save it to another branch.\n\nFirst, checkout the main branch for your development work. From this branch, create a new branch where you can copy the changes into.\n\n$ git checkout master\n$ git checkout -b rescue-the-boss\n\nFrom your boss\u2019s branch, helpful-boss-branch locate the commit you want to keep.\n\n$ git log --oneline helpful-boss-branch\n\nLet\u2019s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch.\n\n$ git cherry-pick e08740b\n\nIf you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss\u2019s branch.\n\nAt this point you might need to make a few additional fixes to help your boss out. 
(You\u2019re angling for a bonus out of all this. Go the extra mile.) Once you\u2019ve made your additional changes, you\u2019ll need to add that work to the branch as well.\n\n$ git add [filename(s)]\n$ git commit -m \"Building on boss's work to improve feature X.\"\n\nGo ahead and test everything, and make sure it\u2019s perfect. You don\u2019t want to introduce your own mistakes during the rescue mission!\n\nUploading the fixed branch\n\nThe next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping you fix their branch.\n\n$ git push -u origin rescue-the-boss\n\nCleaning up and getting back to work\n\nWith your boss rescued, and your bonus secured, you can now delete the local temporary branches.\n\n$ git branch --delete rescue-the-boss\n$ git branch --delete helpful-boss-branch\n\nAnd settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat.\n\n$ git checkout waiting-for-st-nicholas\n$ git stash pop\n\nYour working directory has been returned to exactly the same state you were in at the beginning of the article.\n\nHaving fun with analogies\n\nI\u2019ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it\u2019s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it\u2019s like trying to find that one broken light on the string of Christmas lights). It doesn\u2019t matter if the analogy isn\u2019t perfect. It\u2019s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner\u2019s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works.\n\nOr, if you\u2019re like me, you can choose to never grow old and just keep mucking about in the analogies. I\u2019d argue it\u2019s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.", "year": "2014", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2014-12-02T00:00:00+00:00", "url": "https://24ways.org/2014/dealing-with-emergencies-in-git/", "topic": "code"} {"rowid": 52, "title": "Git Rebasing: An Elfin Workshop Workflow", "contents": "This year Santa\u2019s helpers have been tasked with making a garland. It\u2019s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they\u2019re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn\u2019t. It\u2019s worse; it\u2019s about Git.)\nFor the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises on their own bit of garland, and then links the garland together. Because of this they\u2019re able to work independently, but towards the common goal of making a beautiful garland.\nAt first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. 
As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they\u2019re going to need a system to change the beads out when mistakes are made in the chain.\nThe first common mistake is not looking to see what the latest chain is that\u2019s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It\u2019s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It\u2019s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. They can request a new copy by running the command\ngit pull --rebase=preserve\n(They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they\u2019ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available.\nThe next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I\u2019ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it\u2019s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands:\ngit checkout master\ngit checkout -b 1234_pattern-name\n1234 represents the work order number and pattern-name describes the pattern they\u2019re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland.\nTo combine their work with the main garland, the elves need to make a few decisions. If they\u2019re making a single strand, they issue the following commands:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\nTo share their work they publish the new version of the main garland to the workshop spool with the command git push origin master.\nSometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order. \nTo update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command:\ngit reset --merge ORIG_HEAD\nThis works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf\u2019s work, and can be updated safely. 
The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally.\nWith these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf\u2019s beads will be restrung on the tip of the main workshop garland.\nSometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf\u2019s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they\u2019re ready to close out their work order.\nOnce their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order:\ngit checkout master\ngit merge --ff-only 1234_pattern-name\ngit push origin master\nGenerally this process works well for the elves. Sometimes, though, when they\u2019re tired or bored or a little drunk on festive cheer, they realise there\u2019s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options.\nLet\u2019s pretend the elf has a sequence of five beads she\u2019s been working on. The work order says the pattern should be red-blue-red-blue-red.\n\nIf the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5.\n\nIf she\u2019s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5.\n\nIf one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence.\n\nUsing a tool that\u2019s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available.\nStart the garland surgery process with the command:\ngit rebase --interactive HEAD~4\nA new screen comes up with the following information (the oldest bead is on top):\npick c2e4877 Red bead\npick 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nThe elf adjusts the list, changing \u201cpick\u201d to \u201cedit\u201d next to the first blue bead:\npick c2e4877 Red bead\nedit 9b5555e Blue bead\npick 7afd66b Blue bead\npick e1f2537 Red bead\nShe then saves her work and quits the editor. 
The garland magic has placed her back in time at the moment just after she added the first blue bead.\n\nShe needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes.\nOnce she\u2019s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message \u201cRed bead \u2013 added\u201d so she can easily find it.\n\nThe garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue.\nThe new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. She opens up a new program to help resolve the conflict by running git mergetool.\n\nShe knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads.\n\nWith the conflict resolved, the elf saves her changes and quits the mergetool.\nBack at the command line, the elf checks the status of her work using the command git status.\nrebase in progress; onto 4a9cb9d\nYou are currently rebasing branch '2_RBRBR' on '4a9cb9d'.\n (all conflicts fixed: run \"git rebase --continue\")\n\nChanges to be committed:\n (use \"git reset HEAD ...\" to unstage)\n\n modified: beads.txt\n\nUntracked files:\n (use \"git add ...\" to include in what will be committed)\n\n beads.txt.orig\nShe removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands:\ngit add beads.txt\ngit commit --message \"Blue bead -- resolved conflict\"\n\nWith the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files.\n\nShe incorporates the changes from the left and right column to ensure her bead sequence is correct.\n\nOnce the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland:\ngit add beads.txt\ngit commit --message \"Red bead -- resolved conflict\"\nand then runs the final verification steps in the rebase process (git rebase --continue).\n\nThe verification process runs through to the end, and the elf checks her work using the command git log --oneline.\n9269914 Red bead -- resolved conflict\n4916353 Blue bead -- resolved conflict\naef0d5c Red bead -- added\n9b5555e Blue bead\nc2e4877 Red bead\nShe knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct.\nSometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It\u2019s made her more confident at restringing beads when she\u2019s found real mistakes. And she doesn\u2019t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don\u2019t hurt either. 
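Condensed into a single recipe, the bead surgery above looks roughly like this (a sketch; the number of commits, the file names and the commit messages will differ in your own repository):\n\n$ git rebase --interactive HEAD~4\n# change \"pick\" to \"edit\" next to the commit you want to amend, then save and quit\n$ git add beads.txt\n$ git commit --message \"Red bead -- added\"\n$ git rebase --continue\n# resolve any conflicts with git mergetool, git add and git commit, then continue again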
If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked on.\n\nOur lessons from the workshop:\n\nBy using rebase to update your branches, you avoid merge commits and keep a clean commit history.\nIf you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter --hard.\nIf you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch.\nIf you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. You will need to include how many commits back in time you want to review.", "year": "2015", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2015-12-07T00:00:00+00:00", "url": "https://24ways.org/2015/git-rebasing/", "topic": "code"} {"rowid": 262, "title": "Be the Villain", "contents": "Inclusive Design is the practice of making products and services accessible to, and usable by as many people as reasonably possible without the need for specialized accommodations. The practice was popularized by author and User Experience Design Director Kat Holmes. If getting you to discover her work is the only thing this article succeeds in doing then I\u2019ll consider it a success.\nAs a framework for creating resilient solutions to problems, Inclusive Design is incredible. However, the aimless idealistic aspirations many of its newer practitioners default to can oftentimes run into trouble. Without outlining concrete, actionable outcomes that are then vetted by the people you intend to serve, there is the potential to do more harm than good. \nWhen designing, you take a user flow and make sure it can\u2019t be broken. Ensuring that if something is removed, it can be restored. Or that something editable can also be updated at a later date\u2014you know, that kind of thing. What we want to do is avoid surprises. Much like a water slide with a section of pipe missing, a broken flow forcibly ejects a user, to great surprise and frustration. Interactions within a user flow also have to be small enough to be self-contained, so as to avoid creating a none pizza with left beef scenario.\nLately, I\u2019ve been thinking about how to expand on this practice. Watertight user flows make for a great immediate experience, but it\u2019s all too easy to miss the forest for the trees when you\u2019re a product designer focused on cranking out features. \nWhat I\u2019m concerned about is that, while trying to envision how a user flow could be broken, you should also think about how it could be subverted. 
In addition to preventing the removal of a section of water slide, you also keep someone from mugging the user when they shoot out the end.\nIf you pay attention, you\u2019ll start to notice this subversion with increasing frequency:\n\nDomestic abusers using internet-controlled devices to spy on and control their partner.\nZealots tanking a business\u2019 rating on Google because its owners spoke out against unchecked gun violence.\nForcing people to choose between TV or stalking because the messaging center portion of a cable provider\u2019s entertainment package lacks muting or blocking features.\nWhite supremacists tricking celebrities into endorsing anti-Semitic conspiracy theories.\nFacebook repeatedly allowing housing, credit, and employment advertisers to discriminate against users by their race, ability, and religion.\nWhite supremacists also using a video game chat service as a recruiting tool.\nThe unchecked harassment of minors on Instagram.\nSwatting.\n\nIf I were to guess why we haven\u2019t heard more about this problem, I\u2019d say that optimistically, people have settled out of court. Pessimistically, it\u2019s most likely because we ignore, dismiss, downplay, and suppress those who try to bring it to our attention. \nSubverted design isn\u2019t the practice of employing Dark Patterns to achieve your business goals. If you are not familiar with the term, Dark Patterns are the use of cheap user interface tricks and psychological manipulation to get users to act against their own best interests. User Experience consultant Chris Nodder wrote Evil By Design, a fantastic book that unpacks how to detect and think about them, if you\u2019re interested in this kind of thing\nSubverted design also isn\u2019t beholden design, or simple lack of attention. This phenomenon isn\u2019t even necessarily premeditated. I think it arises from na\u00efve (or willfully ignorant) design decisions being executed at a historically unprecedented pace and scale. These decisions are then preyed on by the shrewd and opportunistic, used to control and inflict harm on the undeserving. Have system, will game.\nThis is worth discussing. As the field of design continues to industrialize empathy, it also continues to ignore the very established practice of threat modeling. Most times, framing user experience in terms of how to best funnel people into a service comes with an implicit agreement that the larger system that necessitates the service is worth supporting. \nTo achieve success in the eyes of their superiors, designers may turn to emotional empathy exercises. By projecting themselves into the perceived surface-level experiences of others, they play-act at understanding how to nudge their targeted demographics into a conversion funnel. This roleplaying exercise has the effect of scoping concerns to the immediate, while simultaneously reinforcing the idea of engagement at all cost within the identified demographic.\nThe thing is, pure engagement leaves the door wide open for bad actors. Even within the scope of a limited population, the assumption that everyone entering into the funnel is acting with good intentions is a poor one. Security researchers, network administrators, and other professionals who practice threat modeling understand that the opposite is true. 
By preventing everyone save for well-intentioned users from operating a system within the parameters you set for them, you intentionally limit the scope of abuse that can be enacted.\nDon\u2019t get me wrong: being able to escort as many users as you can to the happy path is a foundational skill. But we should also be having uncomfortable conversations about why something unthinkable may in fact not be. \nThey\u2019re not going to be fun conversations. It\u2019s not going to be easy convincing others that these aren\u2019t paranoid delusions best tucked out of sight in the darkest, dustiest corner of the backlog. Realistically, talking about it may even harm your career.\nBut consider the alternative. The controlled environment of the hypothetical allows us to explore these issues without propagating harm. Better to be viewed as the office\u2019s resident villain than to have to live with something like this:\n\nIf the past few years have taught us anything, it\u2019s that the choices we make\u2014or avoid making\u2014have consequences. Design has been doing a lot of growing up as of late, including waking up to the idea that technology isn\u2019t neutral. \nYou\u2019re going to have to start thinking the way a monster does\u2014if you can imagine it, chances are someone else can as well. To get into this kind of mindset, inverting the Inclusive Design Principles is a good place to start:\n\nProviding a comparable experience becomes forcing a single path.\nConsidering situation becomes ignoring circumstance.\nBeing consistent becomes acting capriciously.\nGiving control becomes removing autonomy. \nOffering choice becomes limiting options. \nPrioritizing content becomes obfuscating purpose.\nAdding value becomes filling with gibberish. \n\nCombined, these inverted principles start to paint a picture of something we\u2019re all familiar with: a half-baked, unscrupulous service that will jump at the chance to take advantage of you. This environment is also a perfect breeding ground for spawning bad actors.\nThese kinds of services limit you in the ways you can interact with them. They kick you out or lock you in if you don\u2019t meet their unnamed criteria. They force you to parse layout, prices, and policies that change without notification or justification. Their controls operate in ways that are unexpected and may shift throughout the experience. Their terms are dictated to you, gaslighting you to extract profit. Heaps of jargon and flashy, unnecessary features are showered on you to distract from larger structural and conceptual flaws.\nSo, how else can we go about preventing subverted design? Marli Mesibov, Content Strategist and Managing Editor of UX Booth, wrote a brilliant article about how to use Dark Patterns for good\u2014perhaps the most important takeaway being admitting you have a problem in the first place. \nAnother exercise is asking the question, \u201cWhat is the evil version of this feature?\u201d Ask it during the ideation phase. Ask it as part of acceptance criteria. Heck, ask it over lunch. I honestly don\u2019t care when, so long as the question is actually raised. \nIn keeping with the spirit of this article, we can also expand on this line of thinking. Author, scientist, feminist, and pacifist Ursula Franklin urges us to ask, \u201cWhose benefits? Whose risks?\u201d instead of \u201cWhat benefits? What risks?\u201d in her talk, When the Seven Deadly Sins Became the Seven Cardinal Virtues. 
Inspired by the talk, Ethan Marcotte discusses how this relates to the web platform in his powerful post, Seven into seven.\nFew things in this world are intrinsically altruistic or good\u2014it\u2019s just the nature of the beast. However, that doesn\u2019t mean we have to stand idly by when harm is created. If we can add terms like \u201canti-pattern\u201d to our professional vocabulary, we can certainly also incorporate phrases like \u201cabuser flow.\u201d \nDesign finally got a seat at the table. We should use this newfound privilege wisely. Listen to women. Listen to minorities, listen to immigrants, the unhoused, the less economically advantaged, and the less technologically-literate. Listen to the underrepresented and the underprivileged.\nSubverted design is a huge problem, likely one that will never completely go away. However, the more of us who put the hard work into being the villain, the more we can lessen the scope of its impact.", "year": "2018", "author": "Eric Bailey", "author_slug": "ericbailey", "published": "2018-12-06T00:00:00+00:00", "url": "https://24ways.org/2018/be-the-villain/", "topic": "ux"} {"rowid": 65, "title": "The Accessibility Mindset", "contents": "Accessibility is often characterized as additional work, hard to learn and only affecting a small number of people. Those myths have no logical foundation and often stem from outdated information or misconceptions.\nIndeed, it is an additional skill set to acquire, quite like learning new JavaScript frameworks, CSS layout techniques or new HTML elements. But it isn\u2019t particularly harder to learn than those other skills.\nA World Health Organization (WHO) report on disabilities states that,\n\n[i]ncluding children, over a billion people (or about 15% of the world\u2019s population) were estimated to be living with disability.\n\nBeing disabled is not as unusual as one might think. Due to chronic health conditions and older people having a higher risk of disability, we are also currently paving the cowpath to an internet that we can still use in the future.\nAccessibility has a very close relationship with usability, and advancements in accessibility often yield improvements in the usability of a website. Websites are also more adaptable to users\u2019 needs when they are built in an accessible fashion.\nBeyond the bare minimum\nIn the time of table layouts, web developers could create code that passed validation rules but didn\u2019t adhere to the underlying semantic HTML model. We later developed best practices, like using lists for navigation, and with HTML5 we started to wrap those lists in nav elements. Working with accessibility standards is similar. The Web Content Accessibility Guidelines (WCAG) 2.0 can inform your decision to make websites accessible and can be used to test that you met the success criteria. What it can\u2019t do is measure how well you met them. \nW3C developed a long list of techniques that can be used to make your website accessible, but you might find yourself in a situation where you need to adapt those techniques to be the most usable solution for your particular problem.\nThe checkbox below is implemented in an accessible way: The input element has an id and the label associated with the checkbox refers to the input using the for attribute. The hover area is shown with a yellow background and a black dotted border:\nOpen video\n \nThe label is clickable and the checkbox has an accessible description. Job done, right? Not really. 
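The markup for that first attempt looks something like this (a sketch; the id, name and wording are illustrative):\n\n<div class=\"box\">\n  <input type=\"checkbox\" id=\"snow\" name=\"snow\">\n  <label for=\"snow\">Let it snow</label>\n</div>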
Take a look at the space between the label and the checkbox:\nOpen video\n \nThe gutter is created using a right margin which pushes the label to the right. Users would certainly expect this space to be clickable as well. The simple solution is to wrap the label around the checkbox and the text:\nOpen video\n \nYou can also set the label to display:block; to further increase the clickable area:\nOpen video\n \nAnd while we\u2019re at it, users might expect the whole box to be clickable anyway. Let\u2019s apply the CSS that was on a wrapping div element to the label directly:\nOpen video\n \nThe result enhances the usability of your form element tremendously for people with lower dexterity, using a voice mouse, or using touch interfaces. And we only used basic HTML and CSS techniques; no JavaScript was added and not one extra line of CSS.\n
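Pulled together, the improved pattern is roughly the following (again a sketch; the class name, colours and spacing are illustrative):\n\n<label class=\"box\" for=\"snow\">\n  <input type=\"checkbox\" id=\"snow\" name=\"snow\">\n  Let it snow\n</label>\n\n.box {\n  display: block;\n  padding: 0.5em;\n  background: #ffc;\n  border: 1px dotted #000;\n}\n\n.box input {\n  margin-right: 0.5em;\n}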
    \nButton Example\nThe button below looks like a typical edit button: a pencil icon on a real button element. But if you are using a screen reader or a braille keyboard, the button is just read as \u201cbutton\u201d without any indication of what this button is for.\nOpen video\n A screen reader announcing a button. Contains audio.\nThe code snippet shows why the button is not properly announced:\n\nAn icon font is used to display the icon and no text alternative is given. A possible solution to this problem is to use the title or aria-label attributes, which solves the alternative text use case for screen reader users:\nOpen video\n A screen reader announcing a button with a title.\nHowever, screen readers are not the only way people with and without disabilities interact with websites. For example, users can reset or change font families and sizes at will. This helps many users make websites easier to read, including people with dyslexia. Your icon font might be replaced by a font that doesn\u2019t include the glyphs that are icons. Additionally, the icon font may not load for users on slow connections, like on mobile phones inside trains, or because users decided to block external fonts altogether. The following screenshots show the mobile GitHub view with and without external fonts:\nThe mobile GitHub view with and without external fonts.\nEven if the title/aria-label approach was used, the lack of visual labels is a barrier for most people under those circumstances. One way to tackle this is using the old-fashioned img element with an appropriate alt attribute, but surprisingly not every browser displays the alternative text visually when the image doesn\u2019t load.\n\nProviding always visible text is an alternative that can work well if you have the space. It also helps users understand the meaning of the icons.\n\nThis also reads just fine in screen readers:\nOpen video\n A screen reader announcing the revised button.\nClever usability enhancements don\u2019t stop at a technical implementation level. Take the BBC iPlayer pages as an example: when a user navigates the \u201ccaptioned videos\u201d or \u201caudio description\u201d categories and clicks on one of the videos, captions or audio descriptions are automatically switched on. Small things like this enhance the usability and don\u2019t need a lot of engineering resources. It is more about connecting the usability dots for people with disabilities. Read more about the BBC iPlayer accessibility case study.\nMore information\nW3C has created several documents that make it easier to get the gist of what web accessibility is and how it can benefit everyone. You can find out \u201cHow People with Disabilities Use the Web\u201d, there are \u201cTips for Getting Started\u201d for developers, designers and content writers. And for the more seasoned developer there is a set of tutorials on web accessibility, including information on crafting accessible forms and how to use images in an accessible way.\nConclusion\nYou can only produce a web project with long-lasting accessibility if accessibility is not an afterthought. Your organization, your division, your team need to think about accessibility as something that is the foundation of your website or project. It needs to be at the same level as performance, code quality and design, and it needs the same attention. Users often don\u2019t notice when those fundamental aspects of good website design and development are done right. 
But they\u2019ll always know when they are implemented poorly.\nIf you take all this into consideration, you can create accessibility solutions based on the available data and bring accessibility to people who didn\u2019t know they\u2019d need it:\nOpen video\n \nIn this video from the latest Apple keynote, the Apple TV is operated by voice input through a remote. When the user asks \u201cWhat did she say?\u201d the video jumps back fifteen seconds and captions are switched on for a brief time. All three, the remote, voice input and captions have their roots in assisting people with disabilities. Now they benefit everyone.", "year": "2015", "author": "Eric Eggert", "author_slug": "ericeggert", "published": "2015-12-17T00:00:00+00:00", "url": "https://24ways.org/2015/the-accessibility-mindset/", "topic": "code"} {"rowid": 154, "title": "Diagnostic Styling", "contents": "We\u2019re all used to using CSS to make our designs live and breathe, but there\u2019s another way to use CSS: to find out where our markup might be choking on missing accessibility features, targetless links, and just plain missing content. \n\nNote: the techniques discussed here mostly work in Firefox, Safari, and Opera, but not Internet Explorer. I\u2019ll explain why that\u2019s not really a problem near the end of the article \u2014 and no, the reason is not \u201ceveryone should just ignore IE anyway\u201d.\n\nBasic Diagnostics\n\nTo pick a simple example, suppose you want to call out all holdover font and center elements in a site. Simple: you just add the following to your styles.\n\nfont, center {outline: 5px solid red;}\n\nYou could take it further and add in a nice lime background or some such, but big thick red outlines should suffice. Now you\u2019ll be able to see the offenders wherever they appear as you move through the site. (Of course, if you do this on your public server, everyone else will see the outlines too. So this is probably best done on a development server or local copy of the site.)\n\nNot everyone may be familiar with outlines, which were introduced in CSS2, so a word on those before we move on. Outlines are much like borders, except outlines don\u2019t affect layout. Eh? Here\u2019s a comparison.\n\n\n\nOn the left, you have a border. On the right, an outline. The border takes up layout space, pushing other content around and generally being a nuisance. The outline, on the other hand, just draws itself quietly into place. In most current browsers, it will overdraw any content already onscreen, and will be overdrawn by any content placed later \u2014 which is why it overlaps the images above it, and is overlapped by those below it.\n\nOkay, so we can outline deprecated elements like font and center. Is that all? Oh no.\n\nAttribution\n\nLet\u2019s suppose you also want to find any instances of inline style \u2014 that is, use of the style attribute on elements in the markup. This is generally discouraged (outside of HTML e-mails, which I\u2019m not going to get anywhere near), as it\u2019s just another side of the same coin of using font: baking the presentation into the document structure instead of putting it somewhere more manageable. So:\n\n*[style], font, center {outline: 5px solid red;}\n\nAdding that attribute selector to the rule\u2019s grouped selector means that we\u2019ll now be outlining any element with a style attribute.\n\nThere\u2019s a lot more that attribute selectors will let us diagnose. 
For example, we can highlight any images that have empty alt or title text.\n\nimg[alt=\"\"] {border: 3px dotted red;}\nimg[title=\"\"] {outline: 3px dotted fuchsia;}\n\nNow, you may wonder why one of these rules calls for a border, and the other for an outline. That\u2019s because I want them to \u201cadd together\u201d \u2014 that is, if I have an image which possesses both alt and title, and the values of both are empty, then I want it to be doubly marked.\n\n\n\nSee how the middle image there has both red and fuchsia dots running around it? (And am I the only one who sorely misses the actual circular dots drawn by IE5/Mac?) That\u2019s due to its markup, which we can see here in a fragment showing the whole table row.\n\n\nempty title\n\n\"\"\n\"comical\"\n\n\nRight, that\u2019s all well and good, but it misses a rather more serious situation: the selector img[alt=\"\"] won\u2019t match an img element that doesn\u2019t even have an alt attribute. How to tackle this problem?\n\nNot a Problem\n\nWell, if you want to select something based on a negative, you need a negative selector.\n\nimg:not([alt]) {border: 5px solid red;}\n\nThis is really quite a break from the rest of CSS selection, which is all positive: \u201cselect anything that has these characteristics\u201d. With :not(), we have the ability to say (in supporting browsers) \u201cselect anything that hasn\u2019t these characteristics\u201d. In the above example, only img elements that do not have an alt attribute will be selected. So we expand our list of image-related rules to read:\n\nimg[alt=\"\"] {border: 3px dotted red;}\nimg[title=\"\"] {outline: 3px dotted fuchsia;}\nimg:not([alt]) {border: 5px solid red;}\nimg:not([title]) {outline: 5px solid fuchsia;}\n\nWith the following results:\n\n\n\nWe could expand this general idea to pick up tables that lack a summary, or have an empty summary attribute.\n\ntable[summary=\"\"] {outline: 3px dotted red;}\ntable:not([summary]) {outline: 5px solid red;}\n\nWhen it comes to selecting header cells that lack the proper scope, however, we have a trickier situation. Finding headers with no scope attribute is easy enough, but what about those that have a scope attribute with an incorrect value?\n\nIn this case, we actually need to pull an on-off maneuver. This has us setting all th elements to have a highlight style, and then turning it off for the elements that meet our criteria.\n\nth {border: 2px solid red;}\nth[scope=\"col\"], th[scope=\"row\"] {border: none;}\n\nThis was necessary because of the way CSS selectors work. For example, consider this:\n\nth:not([scope=\"col\"]), th:not([scope=\"row\"]) {border: 2px solid red;}\n\nThat would select\u2026all th elements, regardless of their attributes. That\u2019s because every th element doesn\u2019t have a scope of col, doesn\u2019t have a scope of row, or doesn\u2019t have either. There\u2019s no escaping this selector o\u2019 doom!\n\nThis limitation arises because :not() is limited to containing a single \u201cthing\u201d within its parentheses. You can\u2019t, for example, say \u201cselect all elements except those that are images which descend from list items\u201d. Reportedly, this limitation was imposed to make browser implementation of :not() easier.\n\nStill, we can make good use of :not() in the service of further diagnosing. 
Calling out links in trouble is a breeze:\n\na[href]:not([title]) {border: 5px solid red;}\na[title=\"\"] {outline: 3px dotted red;}\na[href=\"#\"] {background: lime;}\na[href=\"\"] {background: fuchsia;}\n\n\n\nHere we have a set that will call our attention to links missing title information, as well as links that have no valid target, whether through a missing URL or a JavaScript-driven page where there are no link fallbacks in the case of missing or disabled JavaScript (href=\"#\").\n\nAnd What About IE?\n\nAs I said at the beginning, much of what I covered here doesn\u2019t work in Internet Explorer, most particularly :not() and outline. (Oh, so basically everything? -Ed.)\n\nI can\u2019t do much about the latter. For the former, however, it\u2019s possible to hack your way around the problem by doing some layered on-off stuff. For example, for images, you replace the previously-shown rules with the following:\n\nimg {border: 5px solid red;}\nimg[alt][title] {border-width: 0;}\nimg[alt] {border-color: fuchsia;}\nimg[alt], img[title] {border-style: double;}\nimg[alt=\"\"][title],\nimg[alt][title=\"\"] {border-width: 3px;}\nimg[alt=\"\"][title=\"\"] {border-style: dotted;}\n\nIt won\u2019t have exactly the same set of effects, given the inability to use both borders and outlines, but will still highlight troublesome images.\n\n\n\nIt\u2019s also the case that IE6 and earlier lack support for even attribute selectors, whereas IE7 added pretty much all the attribute selector types there are, so the previous code block won\u2019t have any effect previous to IE7.\n\nIn a broader sense, though, these kinds of styles probably aren\u2019t going to be used in the wild, as it were. Diagnostic styles are something only you see as you work on a site, so you can make sure to use a browser that supports outlines and :not() when you\u2019re diagnosing. The fact that IE users won\u2019t see these styles is irrelevant since users of any browser probably won\u2019t be seeing these styles.\n\nPersonally, I always develop in Firefox anyway, thanks to its ability to become a full-featured IDE through the addition of extensions like Firebug and the Web Developer Toolbar.\n\n\nYeah, About That\u2026\n\nIt\u2019s true that much of what I describe in this article is available in the WDT. I feel there are two advantages to writing your own set of diagnostic styles.\n\n\n\tWhen you write your own styles, you can define exactly what the visual results will be, and how they will interact. The WDT doesn\u2019t let you make its outlines thicker or change their colors.\n\tYou can combine a bunch of diagnostics into a single set of rules and add it to your site\u2019s style sheet during the diagnostic portion, thus ensuring they persist as you surf around. This can be done in the WDT, but it isn\u2019t as easy (and, at least for me, not as reliable).\n\n\nIt\u2019s also true that a markup validator will catch many of the errors I mentioned, such as missing alt and summary attributes. For some, that\u2019s sufficient. But it won\u2019t catch everything diagnostic styles can, like empty alt values or untargeted links, which are perfectly valid, syntactically speaking.\n\n\nDiagnosis Complete?\n\nI hope this has been a fun look at the concept of diagnostic styling as well as a quick introduction into possibly new concepts like :not() and outlines. This isn\u2019t all there is to say, of course: there is plenty more that could be added to a diagnostic style sheet. 
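For instance, a couple of extra rules in the same spirit (suggestions only; adjust the colours and thicknesses to taste) would flag frames that lack titles and image-type inputs that lack alternate text:\n\niframe:not([title]) {outline: 5px solid red;}\ninput[type=\"image\"]:not([alt]) {outline: 5px solid fuchsia;}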
And everyone\u2019s diagnostics will be different, tuned to meet each person\u2019s unique situation.\n\nMostly, though, I hope this small exploration triggers some creative thinking about the use of CSS to do more than just lay out pages and colorize text. Given the familiarity we acquire with CSS, it only makes sense to use it wherever it might be useful, and setting up visible diagnostic flags is just one more place for it to help us.", "year": "2007", "author": "Eric Meyer", "author_slug": "ericmeyer", "published": "2007-12-20T00:00:00+00:00", "url": "https://24ways.org/2007/diagnostic-styling/", "topic": "process"} {"rowid": 241, "title": "Jank-Free Image Loads", "contents": "There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then \u2014\nYour browser doesn\u2019t support HTML5 video. Here is\n a link to the video instead.\n\n\u2014 oops! \u2014 an image pops in above it, pushing said text down the page, and our poor reader loses their place.\nBy default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I\u2019ll chart a path into a jank-free future \u2013 one in which it\u2019s easy and natural to author elements that load like this:\nYour browser doesn\u2019t support HTML5 video. Here is\n a link to the video instead.\n\nJank is a very old problem, and there is a very old solution to it: the width and height attributes on . The idea is: if we stick an image\u2019s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.\n\nwidth\nSpecifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.\n\n\u2014The HTML 3.2 Specification, published on January 14 1997\nUnfortunately for us, when width and height were first spec\u2019d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:\nSee the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.\n\nwidth and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.\nI have good news, bad news, and great news.\nThe good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.\nThe bad news is that these techniques are all terrible, cumbersome hacks. They\u2019re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.\nSo, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.\naspect-ratio in CSS\nThe first proposed feature? 
An aspect-ratio property in CSS!\nThis would allow us to write CSS like this:\nimg {\n width: 100%;\n}\n\n.thumb {\n aspect-ratio: 1/1;\n}\n\n.hero {\n aspect-ratio: 16/9;\n}\nThis\u2019ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.\nAlas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It\u2019s images \u2013 possibly user generated images \u2013 that can have any aspect ratio. The really tricky problem is unknown-when-you\u2019re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:\n\nAnd inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style=\"\" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I\u2019m cutting off my, (or anyone else\u2019s) ability to change those aspect ratios, for whatever reason, later.\nHow might we specify aspect ratios at a lower level? How might we give browsers information about an image\u2019s dimensions, without giving them explicit instructions about how to style it?\nI\u2019ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!\nA brief note on intrinsic and extrinsic sizing\nWhat do I mean by \u201cintrinsic\u201d and \u201cextrinsic?\u201d\nThe intrinsic size of an image is, put simply, how big it\u2019d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800\u00d7600 image has an intrinsic width of 800px.\nThe extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800\u00d7600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn\u2019t change: its intrinsic size is still 800px. 
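To see the distinction in the browser console, a quick sketch (assuming an 800×600 image that CSS has squeezed down to width: 300px):

var img = document.querySelector('img');
img.naturalWidth; // 800, the intrinsic width baked into the file
img.clientWidth;  // 300, the extrinsic width after CSS has had its say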
But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.\nIt surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).\nCSS aspect-ratio lets us avoid setting extrinsic heights and widths \u2013 and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.\nThe last tool I\u2019m going to talk about gets us out of the extrinsic sizing game all together \u2014 which, I think, is only appropriate for a feature that we\u2019re going to be using in HTML.\nintrinsicsize in HTML\nThe proposed intrinsicsize attribute will let you do this:\n\nThat tells the browser, \u201chey, this image.jpg that I\u2019m using here \u2013 I know you haven\u2019t loaded it yet but I\u2019m just going to let you know right away that it\u2019s going to have an intrinsic size of 800\u00d7600.\u201d This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image\u2019s intrinsic size.\nYou may ask (I did!): wait, what if my references multiple resources, which all have different intrinsic sizes? Well, if you\u2019re using srcset, intrinsicsize is a bit of a misnomer \u2013 what the attribute will do then, is specify an intrinsic aspect ratio:\n\nIn the future (and behind the \u201cExperimental Web Platform Features\u201d flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 \u2013 it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.\nCan\u2019t wait\nI seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!\nDon\u2019t let all of these details bury the big takeaway here: sometime soon (\ud83e\udd1e 2019\u203d \ud83e\udd1e), we\u2019ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can\u2019t wait!", "year": "2018", "author": "Eric Portis", "author_slug": "ericportis", "published": "2018-12-21T00:00:00+00:00", "url": "https://24ways.org/2018/jank-free-image-loads/", "topic": "code"} {"rowid": 87, "title": "Content Planning Demystified", "contents": "The first thing you learn as a junior editor is that you can\u2019t do everything yourself. You must rely on someone else to do at least part of what must be done: the long-range planning, the initial drafting or shooting or recording, the editing, the production, the final polish. All of those pieces of work that belong to someone else take quite a lot of time \u2014 days, weeks, sometimes months. If you\u2019re the sort of person who wrote college term papers the night before they were due, this can come as a bit of a shock. To my twenty-two-year-old self, it certainly did. \n\nIt turns out that the only real way to avoid a trainwreck with editorial work is to get ahead of the trouble, line everything up carefully, and leave oodles of room for all the pieces to connect on time. 
The same is true of content strategy, content planning, and just about everything to do with content on the web, except for the writing itself \u2014 and that, too, usually takes far longer than anyone expects. If you\u2019re not a professional editor and you suddenly find yourself dealing with content creation, you\u2019re almost certainly going to underestimate the time and effort involved, or to skip something important in the planning process that pops up to bite you later. \n\nWithout good content, it doesn\u2019t matter how well designed or coded your web project is, because it won\u2019t be doing the thing it\u2019s meant to do. And even if content is far from your specialty, you may well end up being the only one willing to coordinate it far enough in advance to avoid a chaotic ending. Whether you\u2019re hiring writers and editors for a big project, working with a small client, or coaxing some editorial help out of a co-worker, getting the planning work done correctly \u2014 and ahead of time \u2014 will allow you to orchestrate a glorious ballet of togetherness, instead of feverishly scraping together something to put on your site when the deadline looms. So get out the graph paper and the pocket protector, because we\u2019re going to go Full Nerd on this problem.\n\nKnow your poison\n\nAnyone who\u2019s seen a project delayed for six months by content trouble, or derailed by content that\u2019s bland and unhelpful, knows this stuff can make you feel like a dead sock. To get ahead of the problem, you\u2019re going to have to learn to spot common problems and plan your way around them. On web projects without a dedicated editorial lead, you\u2019re likely to encounter content that is:\n\n\n\tUseless \u2013 Content that doesn\u2019t serve your readers\u2019 needs in some way is pointless. And because it takes up your time and crowds out genuinely helpful things, it\u2019s actually damaging. The logic is simple: you can make content that\u2019s all about you, and that serves your stated messaging goals, but if no one is motivated to read it, it\u2019s a waste of everyone\u2019s time.\n\tBadly written \u2013 When you publish articles or instructions or other content that is too stiffly formal, overly wordy, hard to understand, offensive, unintentionally cheesy, or otherwise off in tone or style, you\u2019re doing two things. First, you\u2019re weakening the information you\u2019re trying to convey by making it obscure or annoying. Second \u2014 and this one is even more damaging \u2014 you\u2019re demonstrating bad taste. When you get the cultural elements of publishing wrong, you encourage your readers to believe that you either don\u2019t understand them or don\u2019t care about getting it wrong.\n\tGooey \u2013 Content strategists have been talking about structured content (that\u2019s chunks versus blobs) for years. If you\u2019re publishing more than a few dozen pages without thinking through the structure of your content, you\u2019re probably missing a chance to improve your long-term efficiency. 
If you\u2019re publishing more than a couple of thousand pages without taking care of your content structure, you\u2019re probably doing a lot more manual wrangling (or cumbersome CMS work) than you need to be, especially when it comes to cross-platform publishing.\n\tUnregulated \u2013 If you\u2019re not tracking what works and what doesn\u2019t \u2014 and especially if you don\u2019t know what \u201cworks\u201d means for your project or organization \u2014 you\u2019re almost certainly getting worse results than you should be, for more work.\n\tOverabundant \u2013 As demonstrated by the cinnamon challenge, too much of a delicious thing can be a giant and publicly embarrassing disaster. For most projects and organizations, if you\u2019re making more stuff than your readers can handle, or if you\u2019re spreading your creative and editorial resources too thinly, that\u2019s bad. Spammers, content farms, and barrel-bottom tabloids have their own special math, the side effects of which include insomnia, irritability, and crying in traffic while silently mouthing Wilson Phillips lyrics.\n\n\n\nPrevent all preventable damage\n\nOnce you know what kind of trouble to look for, you can prevent a lot of it by doing some smart planning well before someone starts writing (or recording or shooting video).\n\n\n\tTo prevent uselessness: Know your readers and decide what you\u2019re trying to accomplish \u2014 with your website as a whole, and with each piece of content, always. Once you know what you\u2019re trying to achieve, you can evaluate your work as you go to make sure that it\u2019s actually doing the right thing. (I\u2019ve written a lot more about this for A List Apart and in The Elements of Content Strategy.)\n\tTo prevent bad writing: Establish a consistent and appropriate style using examples (and a style guide if you need one), designate an editor, hire good writers, and make time for quality control. Kate Kiefer\u2019s style guide for MailChimp is a superb example of style-wrangling that everyone can use.\n\tTo prevent repulsive goo: Give your content as much structure as possible, and know how structure relates to your entire publishing ecosystem, including all those mobile devices. Sara Wachter-Boettcher\u2019s Content Everywhere and Karen McGrane\u2019s Content Strategy for Mobile offer brilliant yet friendly introductions to the wide world of structured content.\n\tTo prevent unregulated chaos: Measure everything that matters to your project, your client, your organization, and especially your readers \u2014 not generic measures of someone else\u2019s success. Measure it all regularly. Be disciplined. Adjust at regular intervals. Rick Allen\u2019s series on content strategy analytics is an excellent place to begin (part one; part two).\n\tTo prevent overabundance: Stop trying to do everything and focus on giving your readers just a few things they want and genuinely need. Don\u2019t establish a schedule your writers might not be able to keep, and focus on differentiating yourself with quality, not quantity. (And while you\u2019re at it, scratch the auto-posting to social networks and the cross-posting between them. It\u2019s about as engaging as an automated phone system.)\n\n\nAt a slightly higher level, pick the right content person (or team) for the work. If you really only need a few pages of copy, find a smart writer who does good work for multi-platform readers. 
If you\u2019re slinging tens of thousands of pages of content, get someone with field experience in high-level editorial planning and the ability to turn blobs into chunks and melted goo into Legos. If you\u2019re starting a project that involves making a lot of content over time, bring in someone with journalism experience (or get your client to do so). \n\n\u201cBut wait!\u201d you may say. \u201cI\u2019m not hiring anyone. I have to do this all myself.\u201d That\u2019s not uncommon at all. The bad news is, you have to learn a bunch of stuff. The good news is, you get to learn a bunch of awesome stuff. Figure out what the project needs, just as though you were going to hire someone, and then give yourself time to get up to speed. If it\u2019s a really complicated project, you\u2019re probably going to have trouble unless you eventually get professional help. But if it\u2019s small and you can do it in steps, you can certainly do much better by giving yourself a plan and working on the things that matter most.\n\n\nPlan for the marathon, not the sprint\n\nLaunching with awesome content is a tiny fraction of a victory, which is why it\u2019s so important that your content not be gooey or unregulated. It also means that if you don\u2019t plan for a realistic publication schedule, you are going to slam into reality in a really unpleasant way not too long after you\u2019ve begun. If you\u2019re asking people to make words (or videos or whatever) for you, they\u2019re going to have to do less of something else, so plan for that beforehand. \n\nAnd while you\u2019re at it, unless publishing is your core business, ditch the feed-the-beast plan that leads to fluffy blog posts and spiritless, unhelpful social media content. It\u2019s antisocial for your reading community, offers short-term gains at best, and will burn you out or lower your standards until you don\u2019t even know you\u2019re doing lousy work. Good content is expensive, no matter how you do it, but spreading yourself too thin is a much worse investment than doing a smaller thing well and gradually building up a body of superb content that people want to share and keep and return to.", "year": "2012", "author": "Erin Kissane", "author_slug": "erinkissane", "published": "2012-12-20T00:00:00+00:00", "url": "https://24ways.org/2012/content-planning-demystified/", "topic": "content"} {"rowid": 54, "title": "Putting My Patterns through Their Paces", "contents": "Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work.\nThe thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me.\nMeet the Teaser\nHere\u2019s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. 
The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.)\nIn its simplest form, a teaser contains a headline, which links to an article:\n\nFairly straightforward, sure. But it\u2019s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types \u2013 some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile.\n\n Nearly every element visible on this page is built out of our generic \u201cteaser\u201d pattern.\n \nBut the teaser variation I\u2019d like to call out is the one that appears on The Toast\u2019s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline.\n\n The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts.\n \nAnd this is, as it happens, the teaser variation that gave me pause. Back in the old days \u2013 you know, like six months ago \u2013 I probably would\u2019ve marked this module up to match the design. In other words, I would\u2019ve looked at the module\u2019s visual hierarchy (metadata up top, headline and content below) and written the following HTML:\n
  <!-- Author name and the teaser-byline/teaser-excerpt class names below are
       illustrative placeholders; only .teaser and .comment-count appear in the
       CSS later in this article. -->
  <div class="teaser">
    <p class="teaser-byline">By <a href="#">Author Name</a></p>
    <a class="comment-count" href="#">126 comments</a>
    <h2><a href="#">Article Title</a></h2>
    <p class="teaser-excerpt">Lorem ipsum dolor sit amet, consectetur…</p>
  </div>
    \nBut then I caught myself, and realized this wasn\u2019t the best approach.\nMoving Beyond Layout\nSince I\u2019ve started working responsively, there\u2019s a question I work into every step of my design process. Whether I\u2019m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself:\n\nWhat if someone doesn\u2019t browse the web like I do?\n\n\u2026Okay, that doesn\u2019t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it\u2019s been invaluable to so many aspects of my practice. If I\u2019m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I\u2019m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It\u2019s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web.\nAnd that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it\u2019d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they\u2019re associated.\nThat\u2019s why I find it\u2019s helpful to begin designing a pattern\u2019s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I\u2019m trying to support. In other words, if someone\u2019s encountering my design without the CSS I\u2019ve written, what should their experience be?\nSo I took a step back, and came up with a different approach:\n
  <!-- Same placeholders as above: only .teaser and .comment-count are taken
       from the CSS that follows. -->
  <div class="teaser">
    <h2><a href="#">Article Title</a></h2>
    <p class="teaser-byline">By <a href="#">Author Name</a></p>
    <p class="teaser-excerpt">
      Lorem ipsum dolor sit amet, consectetur…
      <a class="comment-count" href="#">126 comments</a>
    </p>
  </div>
    \nMuch, much better. This felt like a better match for the content I was designing: the headline \u2013 easily most important element \u2013 was at the top, followed by the author\u2019s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that\u2019s why it\u2019s at the very end of the excerpt, the last element within our teaser. And with some light styling, we\u2019ve got a respectable-looking hierarchy in place:\n\nYeah, you\u2019re right \u2013 it\u2019s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we\u2019ll bolster the markup with an extra element around our title and byline:\n
    \n \n \u2026\n
    \nWith that in place, we can use flexbox to tweak our layout, like so:\n.teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\nflex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children.\n\nGetting closer! But as great as flexbox is, it doesn\u2019t do anything for elements outside our container, like our little comment count, which is, as you\u2019ve probably noticed, still stranded at the very bottom of our teaser.\nFlexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser\u2019s using a sufficiently modern version of flexbox. Here\u2019s the one we used:\nvar doc = document.body || document.documentElement;\nvar style = doc.style;\n\nif ( style.webkitFlexWrap == '' ||\n style.msFlexWrap == '' ||\n style.flexWrap == '' ) {\n doc.className += \" supports-flex\";\n}\nEagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. But since we wanted to serve the layout to IE we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser\u2019s design:\n.supports-flex .teaser-hed {\n display: flex;\n flex-direction: column-reverse;\n}\n.supports-flex .teaser .comment-count {\n position: absolute;\n right: 0;\n top: 1.1em;\n}\nIf the supports-flex class is present, we can apply our flexbox layout to the title area, sure \u2013 but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don\u2019t meet our threshold for our advanced styles are left with an attractive design that matches our HTML\u2019s content hierarchy; but the ones that pass our test receive the finished, final design.\n\nAnd with that, our teaser\u2019s complete.\nDiving Into Device-Agnostic Design\nThis is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I\u2019d recommend Heydon Pickering\u2019s \u201cFlexbox Grid Finesse\u201d, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you\u2019d be right! In fact, it\u2019s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I\u2019ve found that thinking about my design as existing in broad experience tiers \u2013 in layers \u2013 is one of the best ways of designing for the modern web. And what\u2019s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well.\nOpen video\n \n Even a simple search form can be conditionally enhanced, given a little layered thinking.\n \nThis more layered approach to interface design isn\u2019t a new one, mind you: it\u2019s been championed by everyone from Filament Group to the BBC. 
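As an aside, for anyone curious what the @supports route mentioned earlier would look like, here is a rough sketch. It only works where @supports itself is understood, which (as noted above) rules out IE, and is exactly why the JavaScript test won out:

@supports (flex-wrap: wrap) {
  .teaser-hed {
    display: flex;
    flex-direction: column-reverse;
  }
  .teaser .comment-count {
    position: absolute;
    right: 0;
    top: 1.1em;
  }
}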
And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I\u2019ve found to practice responsive design. As Trent Walton once wrote,\n\nLike cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web\u2019s inherent variability.\n\nWe have a weird job, working on the web. We\u2019re designing for the latest mobile devices, sure, but we\u2019re increasingly aware that our definition of \u201csmartphone\u201d is much too narrow. Browsers have started appearing on our wrists and in our cars\u2019 dashboards, but much of the world\u2019s mobile data flows over sub-3G networks. After all, the web\u2019s evolution has never been charted along a straight line: it\u2019s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don\u2019t yet know about, a more device-agnostic, more layered design process can better prepare our patterns \u2013 and ourselves \u2013 for the future.\n(It won\u2019t help you get enough to eat at holiday parties, though.)", "year": "2015", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2015-12-10T00:00:00+00:00", "url": "https://24ways.org/2015/putting-my-patterns-through-their-paces/", "topic": "code"} {"rowid": 162, "title": "Conditional Love", "contents": "\u201cBrowser.\u201d The four-letter word of web design.\n\nI mean, let\u2019s face it: on the good days, when things just work in your target browsers, it\u2019s marvelous. The air smells sweeter, birds\u2019 songs sound more melodious, and both your design and your code are looking sharp.\n\nBut on the less-than-good days (which is, frankly, most of them), you\u2019re compelled to tie up all your browsers in a sack, heave them into the nearest river, and start designing all-imagemap websites. We all play favorites, after all: some will swear by Firefox, Opera fans are allegedly legion, and others still will frown upon anything less than the latest WebKit nightly.\n\nThankfully, we do have an out for those little inconsistencies that crop up when dealing with cross-browser testing: CSS patches.\n\nSpare the Rod, Hack the Browser\n\nBefore committing browsercide over some rendering bug, a designer will typically reach for a snippet of CSS fix the faulty browser. Historically referred to as \u201chacks,\u201d I prefer Dan Cederholm\u2019s more client-friendly alternative, \u201cpatches\u201d.\n\nBut whatever you call them, CSS patches all work along the same principle: supply the proper property value to the good browsers, while giving higher maintenance other browsers an incorrect value that their frustrating idiosyncratic rendering engine can understand.\n\nTraditionally, this has been done either by exploiting incomplete CSS support:\n\n#content {\n\theight: 1%;\t // Let's force hasLayout for old versions of IE.\n\tline-height: 1.6;\n\tpadding: 1em;\n}\nhtml>body #content {\n\theight: auto; // Modern browsers get a proper height value.\n}\n\nor by exploiting bugs in their rendering engine to deliver alternate style rules:\n\n#content p {\n\tfont-size: .8em;\n\t/* Hide from Mac IE5 \\*/\n\tfont-size: .9em;\n\t/* End hiding from Mac IE5 */\n}\n\nWe\u2019ve even used these exploits to serve up whole stylesheets altogether:\n\n@import url(\"core.css\");\n@media tty {\n\ti{content:\"\\\";/*\" \"*/}} @import 'windows-ie5.css'; /*\";}\n}/* */\n\nThe list goes on, and on, and on. 
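Another staple of the genre is the star-HTML hack, which leans on old IE treating the html element as though it had a parent, so only IE6 and earlier ever see the rule. A sketch:

* html #content {
	height: 1%; /* nudge hasLayout for IE6 and below only */
}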
For every browser, for every bug, there\u2019s a patch available to fix some rendering bug.\n\nBut after some time working with standards-based layouts, I\u2019ve found that CSS patches, as we\u2019ve traditionally used them, become increasingly difficult to maintain. As stylesheets are modified over the course of a site\u2019s lifetime, inline fixes we\u2019ve written may become obsolete, making them difficult to find, update, or prune out of our CSS. A good patch requires a constant gardener to ensure that it adds more than just bloat to a stylesheet, and inline patches can be very hard to weed out of a decently sized CSS file.\n\nGiving the Kids Separate Rooms\n\nSince I joined Airbag Industries earlier this year, every project we\u2019ve worked on has this in the head of its templates:\n\n\n\n\n\nThe first element is, simply enough, a link element that points to the project\u2019s main CSS file. No patches, no hacks: just pure, modern browser-friendly style rules. Which, nine times out of ten, will net you a design that looks like spilled eggnog in various versions of Internet Explorer.\n\nBut don\u2019t reach for the mulled wine quite yet. Immediately after, we\u2019ve got a brace of conditional comments wrapped around two other link elements. These odd-looking comments allow us to selectively serve up additional stylesheets just to the version of IE that needs them. We\u2019ve got one for IE 6 and below:\n\n\n\nAnd another for IE7 and above:\n\n\n\nMicrosoft\u2019s conditional comments aren\u2019t exactly new, but they can be a valuable alternative to cooking CSS patches directly into a master stylesheet. And though they\u2019re not a W3C-approved markup structure, I think they\u2019re just brilliant because they innovate within the spec: non-IE devices will assume that the comments are just that, and ignore the markup altogether.\n\nThis does, of course, mean that there\u2019s a little extra markup in the head of our documents. But this approach can seriously cut down on the unnecessary patches served up to the browsers that don\u2019t need them. Namely, we no longer have to write rules like this in our main stylesheet:\n\n#content {\n\theight: 1%;\t// Let's force hasLayout for old versions of IE.\n\tline-height: 1.6;\n\tpadding: 1em;\n}\nhtml>body #content {\n\theight: auto;\t// Modern browsers get a proper height value.\n}\n\nRather, we can simply write an un-patched rule in our core stylesheet:\n\n#content {\n\tline-height: 1.6;\n\tpadding: 1em;\n}\n\nAnd now, our patch for older versions of IE goes in\u2014you guessed it\u2014the stylesheet for older versions of IE:\n\n#content {\n\theight: 1%;\n}\n\nThe hasLayout patch is applied, our design\u2019s repaired, and\u2014most importantly\u2014the patch is only seen by the browser that needs it. The \u201cgood\u201d browsers don\u2019t have to incur any added stylesheet weight from our IE patches, and Internet Explorer gets the conditional love it deserves.\n\nMost importantly, this \u201ccompartmentalized\u201d approach to CSS patching makes it much easier for me to patch and maintain the fixes applied to a particular browser. 
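Spelled out, the head-of-document setup described above looks something like this (a sketch, with the stylesheet file names assumed):

<link rel="stylesheet" type="text/css" href="core.css" media="screen, projection" />
<!--[if lte IE 6]>
	<link rel="stylesheet" type="text/css" href="ie6-and-below.css" media="screen, projection" />
<![endif]-->
<!--[if gte IE 7]>
	<link rel="stylesheet" type="text/css" href="ie7-and-up.css" media="screen, projection" />
<![endif]-->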
If I need to track down a bug for IE7, I don\u2019t need to scroll through dozens or hundreds of rules in my core stylesheet: instead, I just open the considerably slimmer IE7-specific patch file, make my edits, and move right along.\n\nEven Good Children Misbehave\n\nWhile IE may occupy the bulk of our debugging time, there\u2019s no denying that other popular, modern browsers will occasionally disagree on how certain bits of CSS should be rendered. But without something as, well, pimp as conditional comments at our disposal, how do we bring the so-called \u201cgood browsers\u201d back in line with our design?\n\nAssuming you\u2019re loving the \u201cone patch file per browser\u201d model as much as I do, there\u2019s just one alternative: JavaScript.\n\nfunction isSaf() {\n\tvar isSaf = (document.childNodes && !document.all && !navigator.taintEnabled && !navigator.accentColorName) ? true : false;\n\treturn isSaf;\n}\nfunction isOp() {\n\tvar isOp = (window.opera) ? true : false;\n\treturn isOp;\n}\n\nInstead of relying on dotcom-era tactics of parsing the browser\u2019s user-agent string, we\u2019re testing here for support for various DOM objects, whose presence or absence we can use to reasonably infer the browser we\u2019re looking at. So running the isOp() function, for example, will test for Opera\u2019s proprietary window.opera object, and thereby accurately tell you if your user\u2019s running Norway\u2019s finest browser.\n\nWith scripts such as isOp() and isSaf() in place, you can then reasonably test which browser\u2019s viewing your content, and insert additional link elements as needed.\n\nfunction loadPatches(dir) {\n\tif (document.getElementsByTagName() && document.createElement()) {\n\t\tvar head = document.getElementsByTagName(\"head\")[0];\n\t\tif (head) {\n\t\t\tvar css = new Array();\n\t\t\tif (isSaf()) {\n\t\t\t\tcss.push(\"saf.css\");\n\t\t\t} else if (isOp()) {\n\t\t\t\tcss.push(\"opera.css\");\n\t\t\t}\n\t\t\tif (css.length) {\n\t\t\t\tvar link = document.createElement(\"link\");\n\t\t\t\tlink.setAttribute(\"rel\", \"stylesheet\");\n\t\t\t\tlink.setAttribute(\"type\", \"text/css\");\n\t\t\t\tlink.setAttribute(\"media\", \"screen, projection\");\n\t\t\t\tfor (var i = 0; i < css.length; i++) {\n\t\t\t\t\tvar tag = link.cloneNode(true);\n\t\t\t\t\ttag.setAttribute(\"href\", dir + css[0]);\n\t\t\t\t\thead.appendChild(tag);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nHere, we\u2019re testing the results of isSaf() and isOp(), one after the other. For each function that returns true, then the name of a new stylesheet is added to the oh-so-cleverly named css array. Then, for each entry in css, we create a new link element, point it at our patch file, and insert it into the head of our template.\n\nFire it up using your favorite onload or DOMContentLoaded function, and you\u2019re good to go.\n\nScripteat Emptor\n\nAt this point, some of the audience\u2019s more conscientious \u2018scripters may be preparing to lob figgy pudding at this author\u2019s head. And that\u2019s perfectly understandable; relying on JavaScript to patch CSS chafes a bit against the normally clean separation we have between our pages\u2019 content, presentation, and behavior layers.\n\nAnd beyond the philosophical concerns, this approach comes with a few technical caveats attached:\n\nBrowser detection? So un-133t.\n\nBrowser detection is not something I\u2019d typically recommend. 
Whenever possible, a proper DOM script should check for the support of a given object or method, rather than the device with which your users view your content.\n\nIt\u2019s JavaScript, so don\u2019t count on it being available.\n\nAccording to one site, roughly four percent of Internet users don\u2019t have JavaScript enabled. Your site\u2019s stats might be higher or lower than this number, but still: don\u2019t expect that every member of your audience will see these additional stylesheets, and ensure that your content\u2019s still accessible with JS turned off.\n\nBe a constant gardener.\n\nThe sample isSaf() and isOp() functions I\u2019ve written will tell you if the user\u2019s browser is Safari or Opera. As a result, stylesheets written to patch issues in an old browser may break when later releases repair the relevant CSS bugs.\n\nYou can, of course, add logic to these simple little scripts to serve up version-specific stylesheets, but that way madness may lie. In any event, test your work vigorously, and keep testing it when new versions of the targeted browsers come out. Make sure that a patch written today doesn\u2019t become a bug tomorrow.\n\nPatching Firefox, Opera, and Safari isn\u2019t something I\u2019ve had to do frequently: still, there have been occasions where the above script\u2019s come in handy. Between conditional comments, careful CSS auditing, and some judicious JavaScript, browser-based bugs can be handled with near-surgical precision.\n\nSo pass the \u2018nog. It\u2019s patchin\u2019 time.", "year": "2007", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2007-12-15T00:00:00+00:00", "url": "https://24ways.org/2007/conditional-love/", "topic": "code"} {"rowid": 238, "title": "Everything You Wanted To Know About Gradients (And a Few Things You Didn\u2019t)", "contents": "Hello. I am here to discuss CSS3 gradients. Because, let\u2019s face it, what the web really needed was more gradients.\n\nStill, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer\u2019s vocabulary. There\u2019s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work.\n\nOf course, that whole \u2018proper application\u2019 thing is the tricky bit.\n\nBut given their place in our toolkit and their prominence online, it really is heartening to see we can create gradients directly with CSS. They\u2019re part of the draft images module, and implemented in two of the major rendering engines.\n\nStill, I\u2019ve always found CSS gradients to be one of the more confusing aspects of CSS3. So if you\u2019ll indulge me, let\u2019s take a quick look at how to create CSS gradients\u2014hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser.\n\nGradient theory 101 (I hope that\u2019s not really a thing)\n\nRight. So before we dive into the code, let\u2019s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here\u2019s a straightforward one:\n\n I spent seconds hours designing this gradient. 
I hope you like it.\n\nAt either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different:\n\n (Don\u2019t ever really do this. Please. I beg you.)\n\nIt\u2019s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way.\n\nNow, color stops alone do not a gradient make. Between each is a transition point, the fail-over point between the two stops. Now, the transition point doesn\u2019t need to fall exactly between stops: it can be brought closer to one stop or the other, influencing the overall shape of the gradient.\n\nA tale of two syntaxes\n\nArmed with our new vocabulary, let\u2019s look at a CSS gradient in the wild. Behold, the simple input button:\n\n\n\nThere\u2019s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here\u2019s the CSS that makes it happen:\n\ninput[type=submit] {\n\tbackground-color: #F47A20;\n\tbackground-image: -moz-linear-gradient(\n\t\t#FAA51A,\n\t\t#F47A20\n\t\t);\n\tbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\t\tcolor-stop(0, #FAA51A),\n\t\tcolor-stop(1, #F47A20)\n\t\t);\n}\n\nI\u2019ve borrowed David DeSandro\u2019s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that\u2019s perfectly understandable\u2014heck, it sort of turned mine. But let\u2019s step through the CSS slowly, and see if we can\u2019t make it a little less terrifying.\n\nVerbose WebKit is verbose\n\nHere\u2019s the syntax for our little gradient on WebKit:\n\nbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\tcolor-stop(0, #FAA51A),\n\tcolor-stop(1, #F47A20)\n\t);\n\nWoof. Quite a mouthful, no? Well, here\u2019s what we\u2019re looking at:\n\n\n\tWebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients.\n\tThe next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). Linear gradients are simply drawn along the path between those two points, which allows us to change the direction of our gradient simply by altering its start and end points.\n\tAfterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop\u2019s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself.\n\n\nFor a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us:\n\nbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\tfrom(#FAA51A),\n\tto(#FAA51A)\n\t);\n\nfrom(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#FAA51A) is the same as color-stop(1, #FAA51A) or color-stop(100%, #FAA51A)\u2014in both cases, we\u2019re simply declaring the first and last color stops in our gradient.\n\nTerse Gecko is terse\n\nWebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. 
However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3.\n\nNaturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented.\n\nHere\u2019s how we get gradient-y in Gecko:\n\nbackground-image: -moz-linear-gradient(\n\t#FAA51A,\n\t#F47A20\n\t);\n\nWait, what? Done already? That\u2019s right. By default, -moz-linear-gradient assumes you\u2019re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that\u2019s the case, then you simply need to specify your color stops, delimited with a few commas.\n\nI know: that was almost\u2026 painless. But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them.\n\nWe can specify an origin point for our gradient:\n\nbackground-image: -moz-linear-gradient(50% 100%,\n\t#FAA51A,\n\t#F47A20\n\t);\n\nAs well as an angle, to give it a direction:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#F47A20\n\t);\n\nAnd we can specify multiple stops, simply by adding to our comma-delimited list:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#FCC,\n\t#F47A20\n\t);\n\nBy adding a percentage after a given color value, we can determine its position along the gradient path:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#FCC 20%,\n\t#F47A20\n\t);\n\nSo that\u2019s some of the flexibility implicit in the W3C/Mozilla-style syntax.\n\nNow, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit\u2019s more verbose approach to the, well, looseness behind the -moz syntax. \u00c0 chacun son gradient syntax.\n\nStill, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same.\n\nReusing color stops for fun and profit\n\nBut CSS gradients aren\u2019t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects.\n\nTim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it\u2019s the feature image that really catches the eye.\n\nYou see, there\u2019s nothing that says you can\u2019t reuse color stops. And Tim\u2019s exploited that perfectly.\n\nHe\u2019s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he\u2019s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo.\n\n\n\nBut then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss.\n\n\n\nAnd his final color stop ends at the same fully transparent white, completing the effect. Hot? 
I do believe so.\n\nRocking the radials\n\nWe\u2019ve been looking at linear gradients pretty exclusively. But I\u2019d be remiss if I didn\u2019t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar:\n\n\n\nAnd here\u2019s the relevant CSS:\n\nbackground: -moz-radial-gradient(50% 100%, farthest-side,\n\trgb(204, 255, 255) 1%,\n\trgb(85, 85, 85) 15%,\n\trgba(85, 85, 85, 0)\n\t);\nbackground: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15,\n\tfrom(rgb(204, 255, 255)),\n\tto(rgba(85, 85, 85, 0))\n\t);\n\nNow, the syntax builds on what we\u2019ve already learned about linear gradients, so much of it might be familiar to you, picking out color stops and transition points, as well as the two syntaxes\u2019 reliance on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, \u2026)) to shift into circular mode.\n\nMozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There\u2019s also a size constant defined (farthest-side), which determines the reach and shape of our gradient.\n\nWebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each also accepts a radius in pixels, allowing you to control the skew and breadth of the circle.\n\nAgain, this is a fairly modest little radial gradient. Time and article length (and, let\u2019s be honest, your author\u2019s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can\u2019t recommend the references at Mozilla and Apple strongly enough.\n\nLeave no browser behind\n\nBut no matter the kind of gradients you\u2019re working with, there is a large swathe of browsers that simply don\u2019t support gradients. Thankfully, it\u2019s fairly easy to declare a sensible fallback\u2014it just depends on the kind of fallback you\u2019d like. 
Essentially, gradient-blind browsers will disregard any properties containing references to either -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties.\n\nFor example: if you\u2019d like to fall back to a flat color, simply declare a separate background-color:\n\n.nav {\n\tbackground-color: #000;\n\tbackground-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nOr perhaps just create three separate background properties.\n\n.nav {\n\tbackground: #000;\n\tbackground: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nWe can even build on this to fall back to a non-gradient image:\n\n.nav {\n\tbackground: #000 url(\"faux-gradient-lol.png\") repeat-x;\n\tbackground: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nNo matter the approach you feel most appropriate to your design, it\u2019s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings.\n\n(If you\u2019re feeling especially masochistic, there\u2019s even a way to get simple linear gradients working in IE via Microsoft\u2019s proprietary filters. Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I\u2019d recommend avoiding those.\n\nAnd don\u2019t tell Andy Clarke I told you, or he\u2019ll probably unload his Derringer at me. Or something.)\n\nGo forth and, um, gradientify!\n\nIt\u2019s entirely possible your head\u2019s spinning. Heck, mine is, but that might be the effects of the \u2019nog. But maybe you\u2019re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine. \n\nWell, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they\u2019re easily modifiable by tweaking a few CSS properties. Because, let\u2019s face it, less time spent yelling at Photoshop is a very, very good thing.\n\nOf course, CSS-generated gradients are not without their drawbacks. The syntax can be confusing, and it\u2019s still under development at the W3C. As we\u2019ve seen, browser support is still very much in flux. And it\u2019s possible that gradients themselves have some real performance drawbacks\u2014so test thoroughly, and gradient carefully.\n\nBut still, as syntaxes converge, and support improves, I think generated gradients can make a compelling tool in our collective belts. The tasteful design is, of course, entirely up to you.\n\nSo have fun, and get gradientin\u2019.", "year": "2010", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2010-12-22T00:00:00+00:00", "url": "https://24ways.org/2010/everything-you-wanted-to-know-about-gradients/", "topic": "code"} {"rowid": 313, "title": "Centered Tabs with CSS", "contents": "Doug Bowman\u2019s Sliding Doors is pretty much the de facto way to build tabbed navigation with CSS, and rightfully so \u2013 it is, as they say, rockin\u2019 like Dokken. 
But since it relies heavily on floats for the positioning of its tabs, we\u2019re constrained to either left- or right-hand navigation. But what if we need a bit more flexibility? What if we need to place our navigation in the center?\n\nStyling the li as a floated block does give us a great deal of control over margin, padding, and other presentational styles. However, we should learn to love the inline box \u2013 with it, we can create a flexible, centered alternative to floated navigation lists.\n\nHumble Beginnings\n\nDo an extra shot of \u2018nog, because you know what\u2019s coming next. That\u2019s right, a simple unordered list:\n\n\n\nIf we were wedded to using floats to style our list, we could easily fix the width of our ul, and trick it out with some margin: 0 auto; love to center it accordingly. But this wouldn\u2019t net us much flexibility: if we ever changed the number of navigation items, or if the user increased her browser\u2019s font size, our design could easily break.\n\nInstead of worrying about floats, let\u2019s take the most basic approach possible: let\u2019s turn our list items into inline elements, and simply use text-align to center them within the ul:\n\n#navigation ul, #navigation ul li {\n list-style: none;\n margin: 0;\n padding: 0;\n}\n\n#navigation ul {\n text-align: center;\n}\n\n#navigation ul li {\n display: inline;\n margin-right: .75em;\n}\n\n#navigation ul li.last {\n margin-right: 0;\n}\n\nOur first step is sexy, no? Well, okay, not really \u2013 but it gives us a good starting point. We\u2019ve tamed our list by removing its default styles, set the list items to display: inline, and centered the lot. Adding a background color to the links shows us exactly how the different elements are positioned.\n\nNow the fun stuff.\n\nInline Elements, Padding, and You\n\nSo how do we give our links some dimensions? Well, as the CSS specification tells us, the height property isn\u2019t an option for inline elements such as our anchors. However, what if we add some padding to them?\n\n#navigation li a {\n padding: 5px 1em;\n}\n\nI just love leading questions. Things are looking good, but something\u2019s amiss: as you can see, the padded anchors seem to be escaping their containing list.\n\nThankfully, it\u2019s easy to get things back in line. Our anchors have 5 pixels of padding on their top and bottom edges, right? Well, by applying the same vertical padding to the list, our list will finally \u201ccontain\u201d its child elements once again.\n\n\u2019Tis the Season for Tabbing\n\nNow, we\u2019re finally able to follow the \u201cSliding Doors\u201d model, and tack on some graphics:\n\n#navigation ul li a {\n background: url(\"tab-right.gif\") no-repeat 100% 0;\n color: #06C;\n padding: 5px 0;\n text-decoration: none;\n}\n\n#navigation ul li a span {\n background: url(\"tab-left.gif\") no-repeat;\n padding: 5px 1em;\n}\n\n#navigation ul li a:hover span {\n color: #69C;\n text-decoration: underline;\n}\n\nFinally, our navigation\u2019s looking appropriately sexy. By placing an equal amount of padding on the top and bottom of the ul, our tabs are properly \u201ccontained\u201d, and we can subsequently style the links within them.\n\n\n\nBut what if we want them to bleed over the bottom-most border? 
Easy: we can simply decrease the bottom padding on the list by one pixel, like so.\n\nA Special Note for Special Browsers\n\nThe Mac IE5 users in the audience are likely hopping up and down by now: as they\u2019ve discovered, our centered navigation behaves rather annoyingly in their browser. As Philippe Wittenbergh has reported, Mac IE5 is known to create \u201cphantom links\u201d in a block-level element when text-align is set to any value but the default value of left. Thankfully, Philippe has documented a workaround that gets that [censored] venerable browser to behave. Simply place the following code into your CSS, and the links will be restored to their appropriate width:\n\n/**//*/\n#navigation ul li a {\n display: inline-block;\n white-space: nowrap;\n width: 1px;\n}\n/**/\n\nIE for Windows, however, displays an extra kind of crazy. The padding I\u2019ve placed on my anchors is offsetting the spans that contain the left curve of my tabs; thankfully, these shenanigans are easily straightened out:\n\n/**/\n* html #navigation ul li a {\n padding: 0;\n}\n/**/\n\nAnd with that, we\u2019re finally finished.\n\nAll set.\n\nAnd that\u2019s it. With your centered navigation in hand, you can finally enjoy those holiday toddies and uncomfortable conversations with your skeevy Uncle Eustace.", "year": "2005", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2005-12-08T00:00:00+00:00", "url": "https://24ways.org/2005/centered-tabs-with-css/", "topic": "code"} {"rowid": 226, "title": "Documentation-Driven Design for APIs", "contents": "Documentation is like gift wrapping. It seems like superfluous fluff, but your family tends to be rather disappointed when their presents arrive in supermarket carrier bags, so you have to feign some sort of attempt at making your gift look enticing. Documentation doesn\u2019t have to be all hard work and sellotaping yourself to a table \u2013 you can make it useful and relevant.\n\nDocumentation gets a pretty rough deal. It tends to get left until the end of a project, when some poor developer is assigned the \u2018document project\u2019 ticket and wades through each feature of Whizzy New API 3.0 and needs to recall exactly what each method is meant to do. That\u2019s assuming any time is left for documentation at all. The more common outcome resembles last minute homework scribbled on a post-it note, where just the bare bones of what\u2019s available are put out for your users, and you hope that you\u2019ll spot the inconsistencies and mistakes before they do.\n\nWouldn\u2019t it be nicer for everyone if you could make documentation not only outstanding for your users, but also a valuable tool for your development team \u2013 so much so that you couldn\u2019t imagine writing a line of code before you\u2019d documented it?\n\nDocumentation needs to have three main features:\n\n\n\tIt should have total coverage and document all the features of your project. 
Private methods should be documented for your developers, and public features need to be available to your users.\n\tIt should be consistent \u2013 a user should know what to expect from your documentation, and terminology should be accurate to your language.\n\tIt should be current \u2013 and that means staying accurate as new versions of your code base are released.\n\n\nBut you can also get these bonuses:\n\n\n\tAct as a suggested specification \u2013 a guide that will aid a developer in making something consistent and usable.\n\tIt can test your API quality.\n\tIt can enhance the communication skills within your development team.\n\n\nSo how do we get our documentation to be rich and full of features, instead of a little worn out like Boxing Day leftovers?\n\nWrite your documentation first\n\nWhen I say first, I mean first. Not after you\u2019ve started writing the code. Not even after you\u2019ve started writing your unit tests. First. You may or may not have been provided with a decent specification, but the first job should be to turn your requirements for a feature into documentation. \n\nIt works best when it takes the form of in-code comments. It works even better when your in-code comments take a standard documentation format that you can later use to generate published documentation for your users. This has the benefit of immediately making your docs as version controlled as your code-base, and it saves having to rewrite, copy or otherwise harass your docs into something legible later on. \n\nAlmost all languages have a self-documentation format these days. My choice of format for JavaScript is JSDocToolkit, and the sort of things I look for are the ability to specify private and public methods, full options object statements (opts as Opts only is a no-no), and the ability to include good examples.\n\nSo, our example for today will be a new festive feature for a JavaScript API. We\u2019ve been asked to specify a sled for Santa to get around the world to give out toys:\n\n\n\tSanta needs to be able to travel around the world in one night to deliver toys to children, and he\u2019ll need some reindeer to pull his sled.\n\n\nAs documentation, it would look like:\n\n/**\n@name Sled\n@extends Vehicle\n@constructor\n@description Create a new sled to send Santa around the world to deliver toys to good kids.\n\t@param {Object} [opts] Options\n\t@param {number} [opts.capacity='50'] Set the capacity of the sled\n\t@param {string} [opts.pilot='santa'] The pilot of the sled.\n@example\n\t// Create a sled and specify some reindeer.\n\tnew Sled().reindeer(['Dasher', 'Dancer', 'Prancer', 'Vixen', 'Comet', 'Cupid']);\n*/\n\nBy breaking it down as documentation, you can, for example, hand this over to another developer without the need to explain the feature in much depth, and they\u2019ll develop something that has to match this piece of documentation. It specifies everything that is important to this feature \u2013 its default values and types, and where it inherits other features from. 
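To make the hand-off concrete, here is one rough sketch of an implementation that would satisfy the documentation above. It is hypothetical (the article never shows one), and it assumes a Vehicle constructor already exists somewhere in the codebase; the property names are illustrative only:

// Hypothetical implementation written to match the documented interface.
// Vehicle is assumed to be defined elsewhere; property names are illustrative.
function Sled(opts) {
  opts = opts || {};
  this.capacity = opts.capacity || 50;
  this.pilot = opts.pilot || 'santa';
}
Sled.prototype = new Vehicle();
Sled.prototype.constructor = Sled;
Sled.prototype.reindeer = function (reindeer) {
  // Store the reindeer that will pull the sled, as per the @example.
  this.team = reindeer;
  return this; // returning this keeps the chained style used in the example
};

The point is not the code itself, but that anyone on the team could produce something equivalent straight from the comment block.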
\n\nWe know that we need to specify some way of setting reindeer to pull the sled and also some toys to give, and so we can quickly specify extra methods for the sled:\n\n/*\n@name vehicle.Sled#reindeer\n@function\n@description Set the reindeer that will pull Santa's sled.\n\t@param {string[]} reindeer A list of the reindeer.\n@example\n\t// specifying some reindeer\n\tSled().reindeer(['Dasher', 'Dancer', 'Rudolph', 'Vixen']);\n*/\n/*\n@name vehicle.Sled#toys\n@function\n@description Add a list of toys and recipients to the Sled.\n\t@param {Object[]} toys A list of toys and who will receive them.\n@example\n\t// Adding toys to the sled\n\tSled().toys([\n\t\t{name:'Brian', toy:'Fire Engine'},\n\t\t{name:'Drew', toy:'Roller-skates'},\n\t\t{name:'Anna', toy:'Play-doh'},\n\t\t...\n\t\t]);\n*/\n\nJob done! You\u2019ve got a specification to share with your team and something useful for your users in the form of full examples, and you didn\u2019t even have to open another text editor.\n\nUse your documentation to share knowledge\n\nDocumentation isn\u2019t just for users. It\u2019s also used by internal developers to explain what they\u2019ve written and how it works. This is especially valuable where the team is large or the code-base sprawling.\n\nSo, returning to our example, the next step would be to share with the rest of the team (or at least a selection of the team if yours is large) what the documentation looks like. This is useful for two main reasons:\n\n\n\tThey can see if they understand what the documentation says the feature will do. It\u2019s best if they haven\u2019t seen the requirement before. If your fellow developers can\u2019t work out what \u2018MagicMethodX\u2019 is going to return from the docs, neither can your users.\n\tThey can check that the feature accomplishes everything that they expect to, and that it\u2019s consistent with the rest of the functionality.\n\n\nOn previous projects, we\u2019ve taken to referring to this stage of the development process as the \u2018bun fight\u2019. It\u2019s a chance for everyone to have an honest say and throw a few pies without actually causing anyone to have to rewrite any code. If you can identify at this stage that a feature is over-complicated, lacking or just plain useless, you\u2019ll all be much happier to throw out a few lines of documentation than you may have been to throw out a partial, or even complete, piece of functionality.\n\nDocumentation has your back\n\nThe final benefit to working in this way is that your documentation not only remains accurate, it\u2019s always as accurate as your latest release. It can\u2019t fall behind. You can increase the likelihood that your docs will remain up to date by unit testing your examples.\n\nReturning to the previous example, we can add a QUnit unit test to the expected output with ease during the build process \u2013 we know exactly how the code will look and, with the @example tag, we can identify easily where to find the bits that need testing. If it\u2019s tested it\u2019ll definitely work as you expect it to when a user copy and pastes it. You\u2019re ensuring quality from idea to implementation.\n\nAs an extra bauble, the best thing about a system like JSDocToolkit is that it\u2019ll take your inline comments and turn them into beautiful sites, as good systems will allow for customised output templates. 
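Returning briefly to the testing point above, a QUnit check generated from that @example might look something like the following. It is a hypothetical sketch, and it assumes the Sled implementation stores the documented defaults (as in the earlier sketch):

// Hypothetical QUnit test built from the @example; it assumes capacity and
// pilot are stored on the instance as the documentation describes.
test('Sled @example: create a sled and specify some reindeer', function () {
  var sled = new Sled();
  sled.reindeer(['Dasher', 'Dancer', 'Prancer', 'Vixen', 'Comet', 'Cupid']);
  equal(sled.capacity, 50, 'capacity falls back to the documented default');
  equal(sled.pilot, 'santa', 'pilot falls back to the documented default');
});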
You\u2019ll be producing full-featured sites for your projects and plugins with almost no extra effort, but all the benefits.", "year": "2010", "author": "Frances Berriman", "author_slug": "francesberriman", "published": "2010-12-11T00:00:00+00:00", "url": "https://24ways.org/2010/documentation-driven-design-for-apis/", "topic": "process"} {"rowid": 289, "title": "Front-End Developers Are Information Architects Too", "contents": "The theme of this year\u2019s World IA Day was \u201cInformation Everywhere, Architects Everywhere\u201d. This article isn\u2019t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing amount of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People\u2019s ability to be successful online is unequivocally connected to the quality of the code that is written.\nPlaces made of information\nIn information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated:\n\nPeople live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds.\n25 Theses\n\nWe\u2019re creating structures which people rely on for significant parts of their lives, so it\u2019s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML.\nSemantics\nThe word \u201csemantic\u201d has its origin in Greek words meaning \u201csignificant\u201d, \u201csignify\u201d, and \u201csign\u201d. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building\u2019s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don\u2019t need an \u201centrance\u201d sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signify their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building\u2019s purpose happens, but the affect is similar: the building is signifying something about its purpose.\nHTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating:\n\nElements, attributes, and attribute values in HTML are defined \u2026 to have certain meanings (semantics). For example, the
      element represents an ordered list, and the lang attribute represents the language of the content.\nHTML 5.1 Semantics, structure, and APIs of HTML documents\n\nHTML\u2019s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they\u2019re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It\u2019s also what a front-end developer does, whether they realise it or not.\nA brief introduction to information architecture\nWe\u2019re going to start by looking at what an information architect is. There are many definitions, and I\u2019m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is:\n\nthe individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information.\nOf Patterns And Structures\n\nTo me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people.\nJust as there are many definitions of what an information architect is, there are for information architecture itself. I\u2019m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as:\nThe structural design of shared information environments.\nThe synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.\nThe art and science of shaping information products and experiences to support usability, findability, and understanding.\nInformation Architecture For The World Wide Web, 4th Edition\nTo me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015\u2019s State Of The Browser conference, Edd Sowden talked about the accessibility of s. He discovered that by simply not using the semantically-correct
<th> element to mark up headings, in some situations browsers will decide that a <table>
      is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by L\u00e9onie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page.\nOur definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we\u2019re going to use. Dan defines these terms as:\nOntology\nThe definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate.\nWhat we mean when we say what we say.\nTaxonomy\nThe arrangements of the parts. Developing systems and structures for what everything\u2019s called, where everything\u2019s sorted, and the relationships between labels and categories\nChoreography\nRules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.\n\nWe now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states:\n\n\u2026 data is not information. Information is data endowed with relevance and purpose.\nManaging For The Future\n\nIf we use the correct semantic element to mark up content then we\u2019re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we\u2019re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an

<h2> element should be relevant to its parent, which should be the <h1>. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you\u2019ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:\n\nby creating a list of headings so users can quickly scan the page for information\nby using a keyboard command to cycle through one heading at a time\n\nIf we had a document for Christmas Day TV we might structure it something like this:\n

<h1>Christmas Day TV schedule</h1>\n<h2>BBC1</h2>\n<h3>Morning</h3>\n<h3>Evening</h3>\n<h2>BBC2</h2>\n<h3>Morning</h3>\n<h3>Evening</h3>\n<h2>ITV</h2>\n<h3>Morning</h3>\n<h3>Evening</h3>\n<h2>Channel 4</h2>\n<h3>Morning</h3>\n<h3>Evening</h3>
\nIf I use VoiceOver to generate a list of headings, I get this:\n\nOnce I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the <h2>s:\n\nIf we hadn\u2019t used headings, or if we\u2019d nested them incorrectly, our users would be frustrated.\nPutting this together\nLet\u2019s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a <button> element:\n\n
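A minimal sketch of the kind of markup being described here; the id, label and link text are illustrative assumptions rather than the article's original code:

<!-- Sketch only: the id, label and link below are illustrative assumptions. -->
<button aria-controls="panel" aria-expanded="false">Settings</button>
<div id="panel" hidden>
  <ul>
    <li><a href="#">…</a></li>
  </ul>
</div>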
      \nThere\u2019s quite a bit going on here. We\u2019re using the:\n\naria-controls attribute to architect a connection between the \n \n\nColor picker\nThis is our new color picker function. It targets the input element by its class and gets its value. \nfunction getColor() {\n return document.querySelector(\".color\").value;\n}\nUp until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We\u2019ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.\nfunction drawLine() {\n //...code... \n context.strokeStyle = getColor();\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n //...code... \n}\nClear button\nThis is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.\nfunction clearCanvas() {\n if (confirm(\"Want to clear?\")) {\n context.clearRect(0, 0, w, h);\n }\n}\nThe method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner. \nIf we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up.\nLet\u2019s add this event handler to init:\nfunction init() {\n //...code...\n canvas.onpointermove = handleMouseMove;\n canvas.onpointerdown = handleMouseDown;\n canvas.onpointerup = stopDrawing;\n canvas.onpointerout = stopDrawing;\n document.querySelector(\".clear\").onclick = clearCanvas;\n}\nSee the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 3: Draw with 2 lines\nIt\u2019s time to make a line appear where no pointer has gone before. A ghost line! \nFor that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it\u2019s going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn\u2019t matter which one we choose. Let\u2019s go with the x-axis. \nHere is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!). \nNow, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.\nA sketch illustrating the mathematics of reflecting a point.\nWhat happens to the x coordinates?\nThe variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them \u201cthe x coordinates\u201d. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c. \nWhat happens to the y coordinates?\nWhat about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? 
The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b = h - b, d' = h - d, where h is the canvas height.\nThis is the new code for the app\u2019s variables and the two lines: the one that fills the pointer\u2019s path and the one mirroring it across the x-axis.\nfunction drawLine() {\n var a = prevX, a_ = a,\n b = prevY, b_ = h-b,\n c = currX, c_ = c,\n d = currY, d_ = h-d;\n\n //... code ...\n\n // Draw line #1, at the pointer's location\n context.moveTo(a, b);\n context.lineTo(c, d);\n\n // Draw line #2, mirroring the line #1\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ...\n}\nIn case this was too abstract for you, let\u2019s look at some actual numbers to see how this works.\nLet\u2019s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.\nSo b' = 10 - 2 = 8 and d' = 10 - 3 = 7.\nWe use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That\u2019s it, really. This is how the single point, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that are similar to point in behavior.\nIf you are still confused, I don\u2019t blame you. \nHere is the result. Draw something and see what happens.\nSee the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 4: Draw with 8 lines\nI have made yet another confusing sketch, with points C and D, so you understand what we\u2019re trying to do. Later on we\u2019ll look at points E, F, G and H as well. The circled point is the one we\u2019re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas. \nA sketch illustrating points C and D.\nThis is the part where the math gets a bit mathier, as our drawLine function evolves further. We\u2019ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let\u2019s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).\nfunction drawLine() {\n\n //... code ... \n\n // Reassign values\n a_ = w-a; b_ = b;\n c_ = w-c; d_ = d;\n\n // Draw the 3rd line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n // Reassign values\n a_ = w-a; b_ = h-b;\n c_ = w-c; d_ = h-d;\n\n // Draw the 4th line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ... \nWhat is happening?\nYou might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That\u2019s because we want the symmetry to hold for a rectangular canvas as well, and this way it will. \nAlso, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It\u2019s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous). 
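If it helps to see those reflections as code rather than as coordinate algebra, here is a tiny sketch of the two operations the tutorial keeps reusing. These helpers are not part of the original app (which inlines the arithmetic); they simply restate it:

// Sketch only: the app inlines these calculations rather than using helpers.
// Reflect a y coordinate across the x-axis of a canvas that is h pixels tall.
function reflectAcrossXAxis(y, h) {
  return h - y;
}
// Reflect an x coordinate across the y-axis of a canvas that is w pixels wide.
function reflectAcrossYAxis(x, w) {
  return w - x;
}
// On the 10x10 canvas from the earlier example, the point (3,2) mirrors to (3,8)
// across the x-axis and to (7,2) across the y-axis.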
\nWhat happens to the x coordinates?\nAs you recall, our x coordinates are a (prevX) and c (currX).\nFor the third line we are adding, a' = w - a and c' = w - c, which means\u2026\nFor the fourth line, the same thing happens to our x coordinates a and c.\nWhat happens to the y coordinates?\nAs you recall, our y coordinates are b (prevY) and d (currY).\nFor the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this is a reflection across the y-axis. \nFor the fourth line, b' = h - b and d' = h - d, which we\u2019ve seen before: that\u2019s a reflection across the x-axis.\nWe have four more lines, or locations, to define. Note: the part of the code that\u2019s responsible for drawing a micro-line between the newly calculated coordinates is always the same:\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\nWe can leave it out of the next code snippets and just focus on the calculations, i.e, the reassignments. \nOnce again, we need some concrete examples to see where we\u2019re going, so here\u2019s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to out code.\nA sketch illustrating points E and F.\nThis is the code for E and F:\n // Reassign for 5\n a_ = w/2+h/2-b; b_ = w/2+h/2-a;\n c_ = w/2+h/2-d; d_ = w/2+h/2-c;\n\n // Reassign for 6\n a_ = w/2+h/2-b; b_ = h/2-w/2+a;\n c_ = w/2+h/2-d; d_ = h/2-w/2+c;\nTheir x coordinates are identical and their y coordinates are reversed to one another.\nThis one will be out final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).\nA sketch illustrating points G and H.\nThis is the code:\n // Reassign for 7\n a_ = w/2-h/2+b; b_ = w/2+h/2-a;\n c_ = w/2-h/2+d; d_ = w/2+h/2-c;\n\n // Reassign for 8\n a_ = w/2-h/2+b; b_ = h/2-w/2+a;\n c_ = w/2-h/2+d; d_ = h/2-w/2+c;\n //...code... \n}\nOnce again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won\u2019t go into the full details, since this has been a long enough journey as it is, and I think we\u2019ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.\nI hope you had fun learning! This is our final app:\nSee the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.", "year": "2018", "author": "Hagar Shilo", "author_slug": "hagarshilo", "published": "2018-12-02T00:00:00+00:00", "url": "https://24ways.org/2018/the-art-of-mathematics/", "topic": "code"} {"rowid": 236, "title": "Extreme Design", "contents": "Recently, I set out with twelve other designers and developers for a 19th century fortress on the Channel Island of Alderney. We were going to /dev/fort, a sort of band camp for geeks. Our cohort\u2019s mission: to think up, build and finish something \u2013 without readily available internet access.\n\n Alderney runway, photo by Chris Govias\n\n\n\nWait, no internet?\n\nWell, pretty much. As the creators of /dev/fort James Aylett and Mark Norman Francis put it: \u201cImagine a place with no distractions \u2013 no IM, no Twitter\u201d. But also no way to quickly look up a design pattern, code sample or source material. 
Like packing for camping, /dev/fort means bringing everything you\u2019ll need on your back or your hard drive: from long johns to your favourite icon set.\n\nWe got to work the first night discussing ideas for what we wanted to build. By the time breakfast was cleared up the next morning, we\u2019d settled on Russ\u2019s idea to make the Apollo 13 (PDF) transcript accessible. Days two and three were spent collaboratively planning (KJ style) what features we wanted to build, and unravelling the larger UX challenges of the project. The next five days were spent building it. Within 36 hours of touchdown at Southampton Airport, we launched our creation: spacelog.org\n\nThe weather was cold, the coal fire less than ideal, food and supplies a hike away, and the process lightning-fast. A week of designing under extreme circumstances called for an extreme process. Some of this was driven by James\u2019s and Norm\u2019s experience running these things, but a lot of it materialised while we were there \u2013 especially for our three-strong design team (myself, Gavin O\u2019 Carroll and Chris Govias) who, though we knew each other, had never worked together as a group in this kind of scenario before.\n\nThe outcome was a pretty spectacular process, with a some key takeaways useful for any small group trying to build something quickly.\n\nWhat it\u2019s like inside the fort\n\n/dev/fort has the pressure and pace of a hack day without being a hack day \u2013 primarily, no workshops or interruptions\u201a but also a different mentality. While hack days are typically developer-driven with a \u2018hack first, design later (if at all)\u2019 attitude, James was quick to tell the team to hold off from writing any code until we had a plan. This put a healthy pressure on the design and product folks to slash through the UX problems before we started building.\n\nWhile the fort had definitely more of a hack day feel, all of us were familiar with Agile methods, so we borrowed a few useful techniques such as morning stand-ups and an emphasis on teamwork. We cut some really good features to make our launch date, and chunked the work based on user goals, iterating as we went.\n\nWhat made this design process work?\n\nA golden ratio of teams\n\nMy personal experience both professionally and in free-form situations like this, is a tendency to get/hire a designer. Leaders of businesses, founders of start-ups, organisers of events: one designer is not enough! Finding one ace-blooded designer who can \u2018do everything\u2019 will always result in bottleneck and burnout. Like the nuances between different development languages, design is a multifaceted discipline, and very few can claim to be equally strong in every aspect. Overlap in skill set will result in a stronger, more robust interface.\n\nMore importantly, however, having lots of designers to go around meant that we all had the opportunity to pair with developers, polishing the details that don\u2019t usually get polished. As soon as we launched, the public reception of the design and UX was overwhelmingly positive (proof!). 
But also, a lot of people asked us who the designer was, attributing it to one person.\n\nWhile it\u2019s important to note that everyone in our team was multitalented (and could easily shift between roles, helping us all stay unblocked), the golden ratio James and Norm devised was two product/developer folks, three interaction designers and eight developers.\n\n photo by Ben Firshman\n\nEquality inside the fortress walls\n\nSomething magical about the fort is how everyone leaves the outside world on the drawbridge. Job titles, professional status, Twitter followers, and so on. Like scout camp, a mutual respect and trust is expected of all the participants. Like extreme programming, extreme design requires us all to be equal partners in a collaborative team. I think this is especially worth noting for designers; our past is filled with the clear hierarchy of the traditional studio system which, however important for taste and style, seems less compatible with modern web/software development methods.\n\nBeing equal doesn\u2019t mean being the same, however. We established clear roles and teams for ourselves on the second day, deferring to that person when a decision needed to be made. As the interface coalesced, the designers and developers took ownership over certain parts to ensure the details got looked after, while staying open to ideas and revisions from the rest of the cohort.\n\nCreate a space where everyone who enters is equal, but be sure to establish clear roles. Even if it\u2019s just for a short while, the environment will be beneficial.\n\n photo by Ben Firshman\n\nHang your heraldry from the rafters\n\nForts and castles are full of lore: coats of arms; paintings of battles; suits of armour. It\u2019s impossible not to be surrounded by these stories, words and ways of thinking. Like the whiteboards on the walls, putting organisational lore in your physical surroundings makes it impossible not to see.\n\nRyan Alexander brought some of those static-cling whiteboard sheets which were quickly filled with use cases; IA; team roles; and, most importantly, a glossary. As soon as we started working on the project, we realised we needed to get clear on what certain words meant: what was a logline, a range, a phase, a key moment? Were the back-end people using these words in the same way design and product was? Quickly writing up a glossary of terms meant everyone was instantly speaking the same language. There was no \u201cAh, I misunderstood because in the data structure x means y\u201d or, even worse, accidental seepage of technical language into the user interface copy.\n\nPut a glossary of your internal terminology somewhere big and fat on the wall. Stand around it and argue until you agree on what it says. Leave it up; don\u2019t underestimate the power of ambient communication and physical reference.\n\nPlan more, download less\n\nWhile internet is forbidden inside the fort, we did go on downloading expeditions: NASA photography; code documentation; and so on. The project wouldn\u2019t have been possible without a few trips to the web. We had two lists on the wall: groceries and supplies; internets \u2013 \u201cloo roll; Tom Stafford photo\u201c.\n\nThis changed our usual design process, forcing us to plan carefully and think of what we needed ahead of time. Getting to the internet was a thirty-minute hike up a snow covered cliff to the town airport, so you really had to need it, too. 
\n\n The path to the internet\n\nFor the visual design, especially, this resulted in more focus up front, and communication between the designers on what assets we required. It made us make decisions earlier and stick with them, creating less distraction and churn later in the process. \n\nTry it at home: unplug once you\u2019ve got the things you need. As an artist, it\u2019s easier to let your inner voice shine through if you\u2019re not looking at other people\u2019s work while creating.\n\nSocial design\n\nFinally, our design team experimented with a collaborative approach to wireframing. Once we had collectively nailed down use cases, IA, user journeys and other critical artefacts, we tried a pairing approach. One person drew in Illustrator in real time as the other two articulated what to draw. (This would work equally well with two people, but with three it meant that one of us could jump up and consult the lore on the walls or clarify a technical detail.) The result: we ended up considering more alternatives and quickly rallying around one solution, and resolved difficult problems more quickly.\n\nAt a certain stage we discovered it was more efficient for one person to take over \u2013 this happened around the time when the basic wireframes existed in Illustrator and we\u2019d collectively run through the use cases, making sure that everything was accounted for in a broad sense. At this point, take a break, go have a beer, and give yourself a pat on the back.\n\nPut the files somewhere accessible so everyone can use them as their base, and divide up the more detailed UI problems, screens or journeys. At this level of detail it\u2019s better to have your personal headspace.\n\nGavin called this \u2018social design\u2019. Chatting and drawing in real time turned what was normally a rather solitary act into a very social process, with some really promising results. I\u2019d tried something like this before with product or developer folks, and it can work \u2013 but there\u2019s something really beautiful about switching places and everyone involved being equally quick at drawing. That\u2019s not something you get with non-designers, and frequent swapping of the \u2018driver\u2019 and \u2018observer\u2019 roles is a key aspect to pairing.\n\nTackle the forest collectively and the trees individually \u2013 it will make your framework more robust and your details more polished. Win/win. \n\nThe return home\n\nGrateful to see a 3G signal on our phones again, our flight off the island was delayed, allowing for a flurry of domain name look-ups, Twitter catch-up, and e-mails to loved ones. A week in an isolated fort really made me appreciate continuous connectivity, but also just how unique some of these processes might be. \n\nYou just never know what crazy place you might be designing from next.", "year": "2010", "author": "Hannah Donovan", "author_slug": "hannahdonovan", "published": "2010-12-09T00:00:00+00:00", "url": "https://24ways.org/2010/extreme-design/", "topic": "process"} {"rowid": 21, "title": "Keeping Parts of Your Codebase Private on GitHub", "contents": "Open source is brilliant, there\u2019s no denying that, and GitHub has been instrumental in open source\u2019s recent success. I\u2019m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. 
To this end, GitHub created private repositories which act like any other Git repository, only, well, private!\n\nA slightly less common issue, and one I\u2019ve come up against myself, is the desire to only keep certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes.\n\nBefore we begin, it\u2019s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and expose parts of it as either an open source project, or a built GitHub Pages site.\n\nN.B. This post requires some basic Git knowledge.\n\nAdding your public remote\n\nLet\u2019s assume you\u2019re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the \u201cAdding your private remote\u201d section.)\n\nSo, we have a clean slate: nothing has been set up yet, we\u2019re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com. Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account).\n\nOn your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff \u2014 whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this:\n\n$ git remote add origin git@github.com:[user]/site.com.git\n\nHere we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you\u2019ve explicitly changed it) to that remote:\n\n$ git push -u origin master\n\nHere we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like.\n\nThis is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. Now to set up our private remote.\n\nAdding your private remote\n\nWe\u2019ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. (Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. 
Two GitHub repos, and one local one.\n\nIn your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don\u2019t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository.\n\n$ git checkout -b dev\n\nWe have now made a new branch called dev off the branch we were on last (master, unless you renamed it).\n\nNow we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote:\n\n$ git remote add private git@github.com:[user]/private.site.com.git\n\nLike before, we are just telling Git to add a new remote to this repo, only this time we\u2019ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it.\n\nNow we need to tell our dev branch to push to our private remote:\n\n$ git push -u private dev\n\nHere, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we\u2019ve set up upstream tracking. This means that, by default, the dev branch will only push and pull to and from the private remote (unless you ever explicitly state otherwise).\n\nNow you have two branches (master and dev respectively) that push to two remotes (origin and private respectively) which are public and private respectively.\n\nAny work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote.\n\nAdding more branches\n\nSo far we\u2019ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you\u2019d like. Of course, you\u2019d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let\u2019s imagine we want to create a branch to try something out real quickly:\n\n$ git checkout -b test\n\nNow, when we come to push this branch, we can choose which remote we send it to:\n\n$ git push -u private test\n\nThis pushes the new test branch to our private remote (again, setting the persistent tracking with -u).\n\nYou can have as many or as few remotes or branches as you like.\n\nCombining the two\n\nLet\u2019s say you\u2019ve been working on a new feature in private for a few days, and you\u2019ve kept that on the private remote. You\u2019ve now finalised the addition and want to move it into your public repo. This is just a simple merge. Check out your master branch:\n\n$ git checkout master\n\nThen merge in the branch that contained the feature:\n\n$ git merge dev\n\nNow master contains the commits that were made on dev and, once you\u2019ve pushed master to its remote, those commits will be viewable publicly on GitHub:\n\n$ git push\n\nNote that we can just run $ git push on the master branch as we\u2019d previously set up our upstream tracking (-u).\n\nMultiple machines\n\nSo far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let\u2019s say you\u2019ve got yourself a new Mac (yay!) and you want to clone an existing project:\n\n$ git clone git@github.com:[user]/site.com.git\n\nThis will not clone any information about the remotes you had set up on the previous machine. 
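When that happens, the missing pieces are only a couple of commands away. A sketch, assuming the repository and branch names used earlier in the article:

$ git remote add private git@github.com:[user]/private.site.com.git
$ git fetch private
$ git checkout -b dev private/dev

The last command creates a local dev branch starting from the private remote's dev branch, with tracking set up so plain git push and git pull work from it again.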
Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above.\n\nDone!\n\nIf you\u2019d like to see me blitz through all that in one go, check the showterm recording.\n\nThe beauty of this is that we can still share our code, but we don\u2019t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it\u2019s ready for merge. Have a blog post in a Jekyll site that you\u2019re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it\u2019s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain.\n\nAll this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. What you do with them is entirely up to you!", "year": "2013", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2013-12-09T00:00:00+00:00", "url": "https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/", "topic": "code"} {"rowid": 303, "title": "We Need to Talk About Technical Debt", "contents": "In my work with clients, a lot of time is spent assessing old, legacy, sprawling systems and identifying good code, bad code, and technical debt.\nOne thing that constantly strikes me is the frequency with which bad code and technical debt are conflated, so let me start by saying this:\nNot all technical debt is bad code, and not all bad code is technical debt.\nSometimes your bad code is just that: bad code. Calling it technical debt often feels like a more forgiving and friendly way of referring to what may have just been a poor implementation or a substandard piece of work.\nIt is an oft-misunderstood phrase, and when mistaken for meaning \u2018anything legacy or old hacky or nasty or bad\u2019, technical debt is swept under the carpet along with all of the other parts of the codebase we\u2019d rather not talk about, and therein lies the problem.\nWe need to talk about technical debt.\nWhat We Talk About When We Talk About Technical Debt\nThe thing that separates technical debt from the rest of the hacky code in our project is the fact that technical debt, by definition, is something that we knowingly and strategically entered into. Debt doesn\u2019t happen by accident: debt happens when we choose to gain something otherwise-unattainable immediately in return for paying it back (with interest) later on.\nAn Example\nYou\u2019re a front-end developer working on a SaaS product, and your sales team is courting a large customer \u2013 a customer so large that you can\u2019t really afford to lose them. The customer tells you that as long as you can allow them to theme your SaaS application according to their branding, they are willing to sign on the dotted line\u2026 the problem being that your CSS architecture was never designed to incorporate theming at all, and there isn\u2019t currently a nice, clean way to incorporate a theme into the codebase.\nYou and the business make the decision that you will hack a theme into the product in two days. It\u2019s going to be messy, it\u2019s going to be ugly, but you can\u2019t afford to lose a huge customer just because your CSS isn\u2019t quite right, right now. This is technical debt.\nYou deliver the theme, the customer signs up, and everyone is happy. 
Except you (and the business, because you are one and the same) have a decision to make:\n\nDo we go back and build theming into the CSS architecture as a first-class citizen, porting the hacked theme back into a codified and formal framework?\nDo we carry on as we are? Things are working okay, and the customer paid up, so is there any reason to invest time and effort into things after we (and the customer) got what we wanted?\n\nOption 1 is choosing to pay off your debts; Option 2 is ignoring your repayments.\nWith Option 1, you\u2019re acknowledging that you did what you could given the constraints, but, free of constraints, you\u2019d have done something different. Now, you are choosing to implement that something different.\nWith Option 2, however, you are avoiding your responsibility to repay your debt, and you are letting interest accrue. The problem here is that\u2026\n\nyour SaaS product now offers theming to one of your customers;\nanother potential customer might also demand the ability to theme their instance of your product;\nyou can\u2019t refuse them that request, nor can you quickly fulfil it;\nyou hack in another theme, thus adding to the balance of your existing debt;\nand so on (plus interest) for every subsequent theme you need to implement.\n\nHere you have increased entropy whilst making little to no attempt to address what you already knew to be problems.\nYour second, third, fourth, fifth request for theming will be hacked on top of your hack, further accumulating debt whilst offering nothing by way of a repayment. After a long enough period, the code involved will get so unwieldy, so hard to work with, that you are forced to tear it all down and start again, and the most painful part of this is that you\u2019re actually paying off even more than your debt repayments would have been in the first place. Two days of hacking plus, say, five days of subsequent refactoring, would still have been substantially less than the weeks you will now have to spend rewriting your CSS to fix and incorporate the themes properly. You\u2019ve made a loss; your strategic debt ultimately became a loss-making exercise.\nThe important thing to note here is that you didn\u2019t necessarily write bad code. You knew there were two options: the quick way and the correct way. The decision to take the quick route was a definite choice, because you knew there was a better way. Implementing the better way is your repayment.\nGood Debt and Bad Debt\nTechnical debt is acceptable as long as you have intentions to settle; it can be a valuable solution to a business problem, provided the right approach is taken afterwards. That doesn\u2019t, however, mean that all debt is born equal. Just as in real life, there is good debt and there is bad debt.\nGood debt might be\u2026\n\na mortgage;\na student loan, or;\na business loan.\n\nThese are types of debt that will secure you the means of repaying them. 
These are well considered debts whose very reason for being will allow you to make the money to pay them off\u2014they have real, tangible benefit.\nA business loan to secure some equipment and premises will allow you to start an enterprise whose revenue will allow you to pay that debt back; a student loan will allow you to secure the kind of job that has the ability to pay a student loan back.\nThese kinds of debt involve a considered and well-balanced decision to acquire something in the short term in the knowledge that you will have the means, in the long term, to pay it back.\nConversely, bad debt might be\u2026\n\nborrowing $1,000 from a loan shark so you can go to Vegas, or;\ntaking out a payday loan in order to buy a new television.\n\nBoth of these kinds of debt will leave you paying for things that didn\u2019t provide you a way of earning your own capital. That is to say, the loans taken did not secure anything that would help pay off said loans. These are bad debts that will usually provide a net loss. You really are only gaining the short term in exchange for a long term financial responsibility: i.e., was it worth it?\nA good litmus test for debt is to compare the gains of its immediate benefit with the cost of its long term commitment.\nThe earlier example of theming a site is a good debt, provided we are keeping up our repayments (all debt is bad debt if you don\u2019t). A calculated decision to do something \u2018wrong\u2019 in the short term with the promise of better payoffs later on.\nBad Technical Debt\nThe majority of my work is with front-end development teams\u2014CSS is what I do. To that end, the most succinct example of technical debt for that audience is simply:\n!important\nAll front-end developers know the horrors and dangers associated with using !important, yet we continue to use it. Why?\nIt\u2019s not necessarily because we\u2019re bad developers, but because we see a shortcut. !important is usually implemented as a quick way out of a sticky specificity situation. We could spend the rest of the day refactoring our CSS to fix the issue at its source, or we can spend mere seconds typing the word !important and patch over the symptoms.\nThis is us making an explicit decision to do something less than ideal now in exchange for immediate benefit. After all, refactoring our CSS will take a lot more time, and will still only leave us with the same outcome that the vastly quicker !important solution will, so it seems to make better business sense.\nHowever, this is a bad debt. !important takes seconds to implement but weeks to refactor. The cost of refactoring this back out later will be an order of magnitude higher than it would be to have done things properly the first time. The first !important usually sets a precedent, and subsequent developers are likely to have to use it themselves in order to get around the one that you left.\nSo many CSS projects deteriorate because of this one simple word, and rewrites become more and more imminent. That makes it possibly the most costly 10 bytes a CSS developer could ever write.\nBad Code\nNow we\u2019ve got a good idea of what constitutes technical debt, let\u2019s take a look at what constitutes bad code. 
Something I hear time and time again in my client work goes a little like this:\n\nWe\u2019ve amassed a lot of technical debt and we\u2019d like to get a strategy in place\nto begin dealing with it.\n\nWhilst I genuinely admire their willingness to identify and desire to fix problems in their code, sometimes they\u2019re not looking at technical debt at\nall\u2014sometimes they\u2019re just looking at bad code, plain and simple.\nWhere technical debt is knowing that there\u2019s a better way, but the quicker way makes more sense right now, bad code is not caring if there\u2019s a better way at all.\nAgain, looking at a CSS-specific world, a lot of bad code is contributed by non-front-end developers with little training, appreciation, or even respect for the front-end landscape. Writing code with reckless abandon should not be described as technical debt, because to do so would imply that\u2026\n\nthe developers knew they were implementing a sub-par solution, but\u2026\nthe developers also knew that a better solution was out there, which\u2026\nimplies that it can be tidied up relatively simply.\n\nDevelopers writing bad code is a larger and more cultural problem that requires a lot more effort to fix. Hopefully\u2014and usually\u2014bad code is in the minority, but it helps to be objective in identifying and solving it. Bad code usually doesn\u2019t happen for a good enough reason, and is therefore much harder to justify.\nTechnical debt often represents ability in judgement, whereas bad code often represents a gap in skills.\nTakeaway\nTake time to familiarise yourself with the true concepts underlying technical debt and why it exists. Understand that technical debt can be good or bad. Admit that sometimes code is just of poor quality.\nUnderstanding these points will allow you to make better calls around what you might need to refactor and when, and what skills gaps you might have in your team.\n\nSometimes it\u2019s okay to cut corners if there is a tangible gain to be had in the immediate term.\nTechnical debt is okay provided it is a sensible debt and you have intentions to pay it off.\nTechnical debt is not necessarily synonymous with bad code, and bad code isn\u2019t necessarily technical debt. Technical debt is code that was implemented given limited knowledge or resource, with the understanding that you would need to repay something in future.\nTechnical debt is not inherently bad\u2014failure to make repayments is. Periodically, it is justifiable\u2014encouraged, even\u2014to enter a debt in order to fulfil a more pressing matter. However, it is imperative that we begin making repayments as soon as we are capable, be that based on newly available time or knowledge.\nBad code is worse than technical debt as it represents a lack of knowledge or quality control within a team. It needs a much more fundamental fix.", "year": "2016", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2016-12-05T00:00:00+00:00", "url": "https://24ways.org/2016/we-need-to-talk-about-technical-debt/", "topic": "code"} {"rowid": 60, "title": "What\u2019s Ahead for Your Data in 2016?", "contents": "Who owns your data? Who decides what can you do with it? Where can you store it? What guarantee do you have over your data\u2019s privacy? Where can you publish your work? Can you adapt software to accommodate your disability? Is your tiny agency subject to corporate regulation? 
Does another country have rights over your intellectual property?\nIf you aren\u2019t the kind of person who is interested in international politics, I hate to break it to you: in 2016 the legal foundations which underpin our work on the web are being revisited in not one but three major international political agreements, and every single one of those questions is up for grabs. These agreements \u2013 the draft EU Data Protection Regulation (EUDPR), the Trans-Pacific Partnership (TPP), and the draft Transatlantic Trade and Investment Partnership (TTIP) \u2013 stand poised to have a major impact on your data, your workflows, and your digital rights. While some proposed changes could protect the open web for the future, other provisions would set the internet back several decades.\nIn this article we will review the issues you need to be aware of as a digital professional. While each of these agreements covers dozens of topics ranging from climate change to food safety, we will focus solely on the aspects which pertain to the work we do on the web.\nThe Trans-Pacific Partnership\nThe Trans-Pacific Partnership (TPP) is a free trade agreement between the US, Japan, Malaysia, Vietnam, Singapore, Brunei, Australia, New Zealand, Canada, Mexico, Chile and Peru \u2013 a bloc comprising 40% of the world\u2019s economy. The agreement is expected to be signed by all parties, and thereby to come into effect, in 2016. This agreement is ostensibly about the bloc and its members working together for their common interests. However, the latest draft text of the TPP, which was formulated entirely in secret, has only been made publicly available on a Medium blog published by the U.S. Trade Representative which features a patriotic banner at the top proclaiming \u201cTPP: Made in America.\u201d The message sent about who holds the balance of power in this agreement, and whose interests it will benefit, is clear.\nBy far the most controversial area of the TPP has centred around the provisions on intellectual property. These include copyright terms of up to 120 years, mandatory takedowns of allegedly infringing content in response to just one complaint regardless of that complaint\u2019s validity, heavy and disproportionate penalties for alleged violations, and \u2013 most frightening of all \u2013 government seizures of equipment allegedly used for copyright violations. All of these provisions have been raised without regard for the fact that a trade agreement is not the appropriate venue to negotiate intellectual property law.\nOther draft TPP provisions would restrict the digital rights of people with disabilities by banning the workarounds they use every day. These include no exemptions for the adaptations of copywritten works for use in accessible technology (such as text-to-speech in ebook readers), a ban on circumventing DRM or digital locks in order to convert a file to an accessible format, and requiring the takedown of adapted works, such as a video with added subtitles, even if that adaptation would normally have fallen under the definition of fair use.\nThe e-commerce provisions would prohibit data localisation, the practice of requiring data to be physically stored on servers within a country\u2019s borders. Data localisation is growing in popularity following the Snowden revelations, and some of your own personal data may have been recently \u201clocalised\u201d in response to the Safe Harbor verdict. 
Prohibiting data localisation through the TPP would address the symptom but not the cause.\nThe Electronic Frontier Foundation has published an excellent summary of the digital rights issues raised by the agreement along with suggested actions American readers can take to speak out.\nTransatlantic Trade and Investment Partnership\nTTIP stands for the Transatlantic Trade and Investment Partnership, a draft free trade agreement between the United States and the EU. The plan has been hugely controversial and divisive, and the internet and digital provisions of the draft form just a small part of that contention.\nThe most striking digital provision of TTIP is an attempt to circumvent and override European data protection law. As EDRI, a European digital rights organisation, noted:\n\n\u201cthe US proposal would authorise the transfer of EU citizens\u2019 personal data to any country, trumping the EU data protection framework, which ensures that this data can only be transferred in clearly defined circumstances. For years, the US has been trying to bypass the default requirement for storage of personal data in the EU. It is therefore not surprising to see such a proposal being {introduced} in the context of the trade negotiations.\u201d\n\nThis draft provision was written before the Safe Harbor data protection agreement between the EU and US was invalidated by the Court of Justice of the European Union. In other words, there is no longer any protective agreement in place, and our data is as vulnerable as this political situation. However, data protection is a matter of its own law, the acting Data Protection Directive and the draft EU Data Protection Reform. A trade agreement, be it the TTIP or the TPP, is not the appropriate place to revamp a law on data protection.\nOther digital law issues raised by TTIP include the possibility of renegotiating standards on encryption (which in practice means lowering them) and renegotiating intellectual property rights such as copyright. The spectre of net neutrality has even put in an appearance, with an attempt to introduce rules on access to the internet itself being introduced as provisions.\nTTIP is still under discussion, and this month the EU trade representative said that \u201cwe agreed to further intensify our work during 2016 to help negotiations move forward rapidly.\u201d This has been cleverly worded: this means the agreement has little chance of being passed or coming into effect in 2016, which buys civil society more precious time to speak out.\nThe EU Data Protection Regulation\nOn 15 December 2015 the European Commission announced their agreement on the text of the draft General Data Protection Regulation. This law will replace its predecessor, the EU Data Protection Regulation of 1995, which has done a remarkable job of protecting data privacy across the continent throughout two decades of constant internet evolution.\nThe goal of the reform process has been to return power over data, and its uses, to citizens. Users will have more control over what data is captured about them, how it is used, how it is retained, and how it can be deleted. Businesses and digital professionals, in turn, will have to restructure their relationships with client and customer data. Compliance obligations will increase, and difficult choices will have to be made. However, this time should be seen as an opportunity to rethink our relationship with data. After Snowden, Schrems, and Safe Harbor, it is clear that we cannot go back to the way things were before. 
In an era of where every one of our heartbeats is recorded on a wearable device and uploaded to a surveilled data centre in another country, the need for reform has never been more acute.\nWhile texts of the draft GDPR are available, there is not enough mulled wine in the world that will help you get through them. Instead, the law firm Fieldfisher Waterhouse has produced this helpful infographic which will give you a good idea of the changes we can expect to see (view full size):\n\nThe most surprising outcome announced on 15 December was the new regulation\u2019s teeth. Under the new law, companies that fail to heed the updated data protection rules will face fines of up to 4% of their global turnover. Additionally, the law expands the liability for data protection to both the controller (the company hosting the data) and the data processor (the company using the data). The new law will also introduce a one-stop shop for resolving concerns over data misuse. Companies will no longer be able to headquarter their European operations in countries which are perceived to have relatively light-touch data protection enforcement (that means you, Ireland) as a means of automatically rejecting any complaints filed by citizens outside that country.\nFor digital professionals, the most immediate concern is analytics. In fact, I am going to make a prediction: in 2016 we will begin to see the same misguided war on analytics that we saw on cookies. By increasing the legal liabilities for both data processors and controllers \u2013 in other words, the company providing the analytics as well as the site administrator studying them \u2013 the new regulation risks creating disproportionate burdens as well as the same \u201cguilt by association\u201d risks we saw in 2012. There have already been statements made by some within the privacy community that analytics are tracking, and tracking is surveillance, therefore analytics are evil. Yet \u201cjust don\u2019t use analytics,\u201d as was suggested by one advocate, is simply not an option. European regulators should consult with the web community to gain a clear understanding of why analytics are vital to everyday site administrators, and must find a happy medium that protects users\u2019 data without criminalising every website by default. No one wants a repeat of the crisis of consent, as well as the scaremongering, caused by the cookie law.\nAssuming the text is adopted in 2016, the new EU Data Protection Regulation would not come into effect until 2018. We have a considerable challenge ahead, but we also have plenty of time to get it right.", "year": "2015", "author": "Heather Burns", "author_slug": "heatherburns", "published": "2015-12-21T00:00:00+00:00", "url": "https://24ways.org/2015/whats-ahead-for-your-data-in-2016/", "topic": "business"} {"rowid": 114, "title": "How To Create Rockband'ism", "contents": "There are mysteries happening in the world of business these days. We want something else by now. The business of business has to become more than business. We want to be able to identify ourselves with the brands we purchase and we want them to do good things. We want to feel cool because we buy stuff, and we don\u2019t just want a shopping experience \u2013 we want an engagement with a company we can relate to.\n\nLet me get back to \u201cfeeling cool\u201d \u2013 if we want to feel cool, we might get the companies we buy from to support that. 
That\u2019s why I am on a mission to make companies into rockbands.\n\nNow when I say rockbands \u2013 I don\u2019t mean the puke-y, drunky, nasty stuff that some people would highlight is also a part of rockbands. Therefore I have created my own word \u201crockband\u2019ism\u201d. This word is the definition of a childhood dream version of being in a rockband \u2013 the feeling of being more respected and loved and cool, than a cockroach or a suit on the floor of a company.\n\nRockband\u2019ism\n\nRockband\u2019ism is what we aspire to, to feel cool and happy.\n\nSo basically what I am arguing is that companies should look upon themselves as rockbands. Because the world has changed, so business needs to change as well.\n\nI have listed a couple of things you could do today to become a rockband, as a person or as a company.\n\n1 \u2013 Give your support to companies that make a difference to their surroundings \u2013 if you are buying electronics look up what the electronic producers are doing of good in the world (check out the Greenpeace Guide to Greener Electronics).\n\n2 \u2013 Implement good karma in your everyday life (and do well by doing good). What you give out you get back at some point in some shape \u2013 this can also be implemented for business.\n\n3 \u2013 WWRD? \u2013 \u201cwhat would a rockband do\u201d? or if you are into Kenny Rogers \u2013 what would he do in any given situation? This will also show yourself where your business or personal integrity lies because you actually act as a person or a rockband you admire.\n\n4 \u2013 Start leading instead of managing \u2013 If we can measure stuff why should we manage it? Leadership is key here instead of management. When you lead you tell people how to reach the stars, when you manage you keep them on the ground.\n\n5 \u2013 Respect and confide in, that people are the best at what they do. If they aren\u2019t, they won\u2019t be around for long. If they are and you keep on buggin\u2019 them, they won\u2019t be around for long either.\n\n6 \u2013 Don\u2019t be arrogant \u2013 Because audiences can\u2019t stand it \u2013 talk to people as a person not as a company.\n\n7 \u2013 Focus on your return on involvement \u2013 know that you get a return on, what you involve yourself in. No matter if it\u2019s bingo, communities, talks, ornithology or un-conferences.\n\n8 \u2013 Find out where you can make a difference and do it. Don\u2019t leave it up to everybody else to save the world.\n\n9 \u2013 Find out what you can do to become an authentic, trustworthy and remarkable company. Maybe you could even think about this a lot and make these thoughts into an actionplan.\n\n10 \u2013 Last but not least \u2013 if you\u2019re not happy \u2013 do something else, become another type of rockband, maybe a soloist of a sort, or an orchestra.\n\nNo more business as usual\n\nThis really isn\u2019t time for more business as usual, our environment (digital, natural, work or any other kind of environment) is changing. You are going to have to change too.\n\nThis article actually sprang from a talk I did at the Shift08 conference in Lisbon in October. In addition to this article for 24 ways I have turned the talk into an eBook that you can get on Toothless Tiger Press for free.\n\nMay you all have a sustainable and great Christmas full of great moments with your loved ones. 
December is a month for gratitude, enjoyment and love.", "year": "2008", "author": "Henriette Weber", "author_slug": "henrietteweber", "published": "2008-12-07T00:00:00+00:00", "url": "https://24ways.org/2008/how-to-create-rockbandism/", "topic": "business"} {"rowid": 299, "title": "What the Heck Is Inclusive Design?", "contents": "Naming things is hard. And I don\u2019t just mean CSS class names and JSON properties. Finding the right term for what we do with the time we spend awake and out of bed turns out to be really hard too.\nI\u2019ve variously gone by \u201cfront-end developer\u201d, \u201cuser experience designer\u201d, and \u201caccessibility engineer\u201d, all clumsy and incomplete terms for labeling what I do as an\u2026 erm\u2026 see, there\u2019s the problem again.\nIt\u2019s tempting to give up entirely on trying to find the right words for things, but this risks summarily dispensing with thousands of years spent trying to qualify the world around us. So here we are again.\nRecently, I\u2019ve been using the term \u201cinclusive design\u201d and calling myself an \u201cinclusive designer\u201d a lot. I\u2019m not sure where I first heard it or who came up with it, but the terminology feels like a good fit for the kind of stuff I care to do when I\u2019m not at a pub or asleep.\nThis article is about what I think \u201cinclusive design\u201d means and why I think you might like it as an idea.\nIsn\u2019t \u2018inclusive design\u2019 just \u2018accessibility\u2019 by another name?\nNo, I don\u2019t think so. But that\u2019s not to say the two concepts aren\u2019t related. Note the \u2018design\u2019 part in \u2018inclusive design\u2019 \u2014 that\u2019s not just there by accident. Inclusive design describes a design activity; a way of designing things.\nThis sets it apart from accessibility \u2014 or at least our expectations of what \u2018accessibility\u2019 entails. Despite every single accessibility expert I know (and I know a lot) recommending that accessibility should be integrated into design process, it is rarely ever done. Instead, it is relegated to an afterthought, limiting its effect.\nThe term \u2018accessibility\u2019 therefore lacks the power to connote design process. It\u2019s not that we haven\u2019t tried to salvage the term, but it\u2019s beginning to look like a lost cause. So maybe let\u2019s use a new term, because new things take new names. People get that.\nThe \u2018access\u2019 part of accessibility is also problematic. Before we get ahead of ourselves, I don\u2019t mean access is a problem \u2014 access is good, and the more accessible something is the better. I mean it\u2019s not enough by itself.\nImagine a website filled with poorly written and lackadaisically organized information, including a bunch of convoluted and confusing functionality. To make this site accessible is to ensure no barriers prevent people from accessing the content. \nBut that doesn\u2019t make the content any better. It just means more people get to suffer it. \nWhoopdidoo.\nAccess is certainly a prerequisite of inclusion, but accessibility compliance doesn\u2019t get you all the way there. It\u2019s possible to check all the boxes but still be left with an unusable interface. And unusable interfaces are necessarily inaccessible ones. Sure, you can take an unusable interface and make it accessibility compliant, but that only placates stakeholders\u2019 lawyers, not users. Users get little value from it.\nSo where have we got to? 
Access is important, but inclusion is bigger than access. Inclusive design means making something valuable, not just accessible, to as many people as we can.\nSo inclusive design is kind of accessibility + UX?\nCloser, but there are some problems with this definition.\nUX is, you will have already noted, a broad term encompassing activities ranging from conducting research studies to optimizing the perceived affordance of interface elements. But overall, what I take from UX is that it\u2019s the pursuit of making interfaces understandable.\nAs it happens, WCAG 2.0 already contains an \u2018Understandable\u2019 principle covering provisions such as readability, predictability and feedback. So you might say accessibility \u2014 at least as described by WCAG \u2014 already covers UX.\nUnfortunately, the criteria are limited, plus some really important stuff (like readability) is relegated to the AAA level; essentially \u201cbonus points if you get the time (you won\u2019t).\u201d\nSo better to let UX folks take care of this kind of thing. It\u2019s what they do. Except, therein lies a danger. UX professionals don\u2019t tend to be well versed in accessibility, so their \u2018solutions\u2019 don\u2019t tend to work for that many people. My friend Billy Gregory coined the term SUX, or \u201cSome UX\u201d: if it doesn\u2019t work for different users, it\u2019s only doing part of the job it should be. \nSUX won\u2019t do, but it\u2019s not just a disability issue. All sorts of user circumstances go unchecked when you\u2019re shooting straight for what people like, and bypassing what people need: device type, device settings, network quality, location, native language, and available time to name just a few.\nIn short, inclusive design means designing things for people who aren\u2019t you, in your situation. In my experience, mainstream UX isn\u2019t very good at that. By bolting accessibility onto mainstream UX we labor under the misapprehension that most people have a \u2018normal\u2019 experience, a few people are exceptions, and that all of the exceptions pertain to disability directly.\nSo inclusive design isn\u2019t really about disability?\nIt is about disability, but not in the same way as accessibility. Accessibility (as it is typically understood, anyway) aims to make sure things work for people with clinically recognized disabilities. Inclusive design aims to make sure things work for people, not forgetting those with clinically recognized disabilities. A subtle, but not so subtle, difference.\nLet\u2019s go back to discussing readability, because that\u2019s a good example. Now: everyone benefits from readable text; text with concise sentences and widely-understood words. It certainly helps people with cognitive impairments, but it doesn\u2019t hinder folks who have less trouble with comprehension. In fact, they\u2019ll more than likely be thankful for the time saved and the clarity. Readable text covers the whole gamut. It\u2019s \u2014 you\u2019ve got it \u2014 inclusive.\nLegibility is another one. A clear, well-balanced typeface makes the reading experience less uncomfortable and frustrating for all concerned, including those who have various forms of visual dyslexia. Again, everyone\u2019s happy \u2014 so why even contemplate a squiggly, sketchy typeface? Leave well alone.\nContrast too. No one benefits from low contrast; everyone benefits from high contrast. Simple. There\u2019s no more work involved, it just entails better decision making. 
And that\u2019s what design is really: decision making.\nHow about zoom support? If you let your users pinch zoom on their phones they can compensate for poor eyesight, but they can also increase the touch area of controls, inspect detail in images, and compose better screen shots. Unobtrusively supporting options like zoom makes interfaces much more inclusive at very little cost.\nAnd when it comes to the underlying HTML code, you\u2019re in luck: it has already been designed, from the outset, to be inclusive. HTML is a toolkit for inclusion. Using the right elements for the job doesn\u2019t just mean the few who use screen readers benefit, but keyboard accessibility comes out-of-the-box, you can defer to browser behavior rather than writing additional scripts, the code is easier to read and maintain, and editors can create content that is effortlessly presentable. \nWait\u2026 are you talking about universal design?\nHmmm. Yes, I guess some folks might think of \u201cuniversal design\u201d and \u201cinclusive design\u201d as synonymous. I just really don\u2019t like the term universal in this context. \nThe thing is, it gives the impression that you should be designing for absolutely everyone in the universe. Though few would adopt a literal interpretation of \u201cuniversal\u201d in this context, there are enough developers who would deliberately misconstrue the term and decry universal design as an impossible task. I\u2019ve actually had people push back by saying, \u201cwhat, so I\u2019ve got to make it work for people who are allergic to computers? What about people in comas?\u201d\nFor everyone\u2019s sake, I think the term \u2018inclusive\u2019 is less misleading. Of course you can\u2019t make things that everybody can use \u2014 it\u2019s okay, that\u2019s not the aim. But with everything that\u2019s possible with web technologies, there\u2019s really no need to exclude people in the vast numbers that we usually are. \nAccessibility can never be perfect, but by thinking inclusively from planning, through prototyping to production, you can cast a much wider net. That means more and happier users at very little if any more effort.\nIf you like, inclusive design is the means and accessibility is the end \u2014 it\u2019s just that you get a lot more than just accessibility along the way.\nConclusion\nThat\u2019s inclusive design. Or at least, that\u2019s a definition for a thing I think is a good idea which I identify as inclusive design. I\u2019ll leave you with a few tips.\nInvolve code early\nWeb interfaces are made of code. If you\u2019re not working with code, you\u2019re not working on the interface. That\u2019s not to say there\u2019s anything wrong with sketching or paper prototyping \u2014 in fact, I recommend paper prototyping in my book on inclusive design. Just work with code as soon as you can, and think about code even before that. Maintain a pattern library of coded solutions and omit any solutions that don\u2019t adhere to basic accessibility guidelines.\nRespect conventions\nYour content should be fresh, inventive, radical. Your interface shouldn\u2019t. Adopt accepted conventions in the appearance, placement and coding of interface elements. Users aren\u2019t there to experience interface design; they\u2019re there to use an interface. 
In other words: stop showing off (unless, of course, the brief is to experiment with new paradigms in interface design, for an audience of interface design researchers).\nDon\u2019t be exact\n\u201cPerfection is the enemy of good\u201d. But the pursuit of perfection isn\u2019t just to be avoided because nothing ever gets finished. Exacting design also makes things inflexible and brittle. If your design depends on elements retaining precise coordinates, they\u2019ll break easily when your users start adjusting font settings or zooming. Choose not to position elements exactly or give them fixed, \u201cmagic number\u201d dimensions. Make less decisions in the interface so your users can make more decisions for it.\nEnforce simplicity\nThe virtue of simplicity is difficult to overestimate. The simpler an interface is, the easier it is to use for all kinds of users. Simpler interfaces require less code to make too, so there\u2019s an obvious performance advantage. There are many design decisions that require user research, but keeping things simple is always the right thing to do. Not simplified or simple-seeming or simplistic, but simple. \nDo a little and do it well, for as many people as you can.", "year": "2016", "author": "Heydon Pickering", "author_slug": "heydonpickering", "published": "2016-12-07T00:00:00+00:00", "url": "https://24ways.org/2016/what-the-heck-is-inclusive-design/", "topic": "process"} {"rowid": 136, "title": "Making XML Beautiful Again: Introducing Client-Side XSL", "contents": "Remember that first time you saw XML and got it? When you really understood what was possible and the deep meaning each element could carry? Now when you see XML, it looks ugly, especially when you navigate to a page of XML in a browser. Well, with every modern browser now supporting XSL 1.0, I\u2019m going to show you how you can turn something as simple as an ATOM feed into a customised page using a browser, Notepad and some XSL.\n\nWhat on earth is this XSL?\n\nXSL is a family of recommendations for defining XML document transformation and presentation. It consists of three parts:\n\n\n\tXSLT 1.0 \u2013 Extensible Stylesheet Language Transformation, a language for transforming XML\n\tXPath 1.0 \u2013 XML Path Language, an expression language used by XSLT to access or refer to parts of an XML document. (XPath is also used by the XML Linking specification)\n\tXSL-FO 1.0 \u2013 Extensible Stylesheet Language Formatting Objects, an XML vocabulary for specifying formatting semantics\n\n\nXSL transformations are usually a one-to-one transformation, but with newer versions (XSL 1.1 and XSL 2.0) its possible to create many-to-many transformations too. So now you have an overview of XSL, on with the show\u2026\n\nSo what do I need?\n\nSo to get going you need a browser an supports client-side XSL transformations such as Firefox, Safari, Opera or Internet Explorer. Second, you need a source XML file \u2013 for this we\u2019re going to use an ATOM feed from Flickr.com. And lastly, you need an editor of some kind. I find Notepad++ quick for short XSLs, while I tend to use XMLSpy or Oxygen for complex XSL work. \n\nBecause we\u2019re doing a client-side transformation, we need to modify the XML file to tell it where to find our yet-to-be-written XSL file. Take a look at the source XML file, which originates from my Flickr photos tagged sky, in ATOM format.\n\nThe top of the ATOM file now has an additional instruction, as can been seen on Line 2 below. 
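Roughly, the top of the feed then looks something like this \u2013 a sketch only, with the stylesheet filename assumed for illustration:\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<?xml-stylesheet type=\"text/xsl\" href=\"atom.xsl\"?> <!-- the href value is illustrative -->\n<feed xmlns=\"http://www.w3.org/2005/Atom\">\n\t...\n</feed>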
This instructs the browser to use the XSL file to transform the document.\n\n\n\n\n\nYour first transformation\n\nYour first XSL will look something like this:\n\n\n\n\t\n\n\nThis is pretty much the starting point for most XSL files. You will notice the standard XML processing instruction at the top of the file (line 1). We then switch into XSL mode using the XSL namespace on all XSL elements (line 2). In this case, we have added namespaces for ATOM (line 4) and Dublin Core (line 5). This means the XSL can now read and understand those elements from the source XML. \n\nAfter we define all the namespaces, we then move onto the xsl:output element (line 6). This enables you to define the final method of output. Here we\u2019re specifying html, but you could equally use XML or Text, for example. The encoding attributes on each element do what they say on the tin. As with all XML, of course, we close every element including the root.\n\nThe next stage is to add a template, in this case an as can be seen below:\n\n\n\n\t\n\t\n\t\t\n\t\t\t\n\t\t\t\tMaking XML beautiful again : Transforming ATOM\n\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\n\t\n\n\nThe beautiful thing about XSL is its English syntax, if you say it out loud it tends to make sense. \n\nThe / value for the match attribute on line 8 is our first example of XPath syntax. The expression / matches any element \u2013 so this will match against any element in the document. As the first element in any XML document is the root element, this will be the one matched and processed first.\n\nOnce we get past our standard start of a HTML document, the only instruction remaining in this is to look for and match all elements using the in line 14, above.\n\n\n\n\t\n\t\n\t\t\n\t\n\t\n\t\t
\n\nThis new template (line 12, above) matches atom:feed and starts to write the new HTML elements out to the output stream. The xsl:value-of element does exactly what you\u2019d expect \u2013 it finds the value of the item specified in its select attribute. With XPath you can select any element or attribute from the source XML. \n\nThe last part is a repeat of the now familiar xsl:apply-templates from before, but this time we\u2019re using it inside of a called template. Yep, XSL is full of recursion\u2026
\n\nThe xsl:template which matches atom:entry (line 1) occurs every time there is an atom:entry element in the source XML file. So in total that is 20 times \u2013 this is naturally why XSLT is full of recursion. This template has been matched and therefore called higher up in the document, so we can start writing list elements directly to the output stream. The first part is simply a list item
      with a link wrapped within it (lines 3-7). We can select attributes using XPath using @. \n\nThe second part of this template selects the date, but performs a XPath string function on it. This means that we only get the date and not the time from the string (line 9). This is achieved by getting only the part of the string that exists before the T. \n\nRegular Expressions are not part of the XPath 1.0 string functions, although XPath 2.0 does include them. Because of this, in XSL we tend to rely heavily on the available XML output. \n\nThe third part of the template (line 12) is a again, but this time we use an attribute of called disable output escaping to turn escaped characters back into XML. \n\nThe very last section is another call, taking us three templates deep. Do not worry, it is not uncommon to write XSL which go 20 or more templates deep!\n\n\n\t\n\t\t\n\t\t\t\n\t\t\t\ttag\n\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t\t \n\t\n\n\nIn our final , we see a combination of what we have done before with a couple of twists. Once we match atom:category we then count how many elements there are at that same level (line 2). The XPath . means \u2018self\u2019, so we count how many category elements are within the element. \n\nFollowing that, we start to output a link with a rel attribute of the predefined text, tag (lines 4-6). In XSL you can just type text, but results can end up with strange whitespace if you do (although there are ways to simply remove all whitespace). \n\nThe only new XPath function in this example is concat(), which simply combines what XPaths or text there might be in the brackets. We end the output for this tag with an actual tag name (line 10) and we add a space afterwards (line 12) so it won\u2019t touch the next tag. (There are better ways to do this in XSL using the last() XPath function). \n\nAfter that, we go back to the element again if there is another category element, otherwise we end the loop and end this .\n\nA touch of style\n\nBecause we\u2019re using recursion through our templates, you will find this is the end of the templates and the rest of the XML will be ignored by the parser. Finally, we can add our CSS to finish up. (I have created one for Flickr and another for News feeds)\n\n\n\nSo we end up with a nice simple to understand but also quick to write XSL which can be used on ATOM Flickr feeds and ATOM News feeds. With a little playing around with XSL, you can make XML beautiful again.\n\nAll the files can be found in the zip file (14k)", "year": "2006", "author": "Ian Forrester", "author_slug": "ianforrester", "published": "2006-12-07T00:00:00+00:00", "url": "https://24ways.org/2006/beautiful-xml-with-xsl/", "topic": "code"} {"rowid": 142, "title": "Revealing Relationships Can Be Good Form", "contents": "A few days ago, a colleague of mine \u2013 someone I have known for several years, who has been doing web design for several years and harks back from the early days of ZDNet \u2013 was running through a prototype I had put together for some user testing. As with a lot of prototypes, there was an element of \u2018smoke and mirrors\u2019 to make things look like they were working. \n\nOne part of the form included a yes/no radio button, and selecting the Yes option would, in the real and final version of the form, reveal some extra content. 
Rather than put too much JavaScript in the prototype, I took a proverbial shortcut and created a link which I wrapped around the text next to the radio button \u2013 clicking on that link would cause the form to mimic a change event on the radio button. But it wasn\u2019t working for him. \n\nWhy was that? Because whereas I created the form using a

      \n \n \n \n ...\n\nCheck out the example.\n\nFun with Backgrounds\n\nPop in a tiled background to give your table some character! Internet Explorer\u2019s PNG hack unfortunately only works well when applied to a cell.\n\nTo figure out which background will appear over another, just remember the hierarchy:\n\n (bottom) Table \u2192 Column \u2192 Row Group \u2192 Row \u2192 Cell (top)\n\nThe Future is Bright\n\nOnce browser-makers start implementing CSS3, we\u2019ll have more power at our disposal. Just with :first-child and :last-child, you can pull off a scalable version of our previous table with rounded corners and all \u2014 unfortunately, only Firefox manages to pull this one off successfully. And the selector the masses are clamouring for, nth-child, will make zebra tables easy as eggnog.", "year": "2005", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2005-12-19T00:00:00+00:00", "url": "https://24ways.org/2005/tables-with-style/", "topic": "code"} {"rowid": 112, "title": "User Styling", "contents": "During the recent US elections, Twitter decided to add an \u2018election bar\u2019 as part of their site design. You could close it if it annoyed you, but the action wasn\u2019t persistent and the bar would always come back like a bad penny. \n\nThe solution to common browsing problems like this is CSS. \u2018User styling\u2019 (or the creepy \u2018skinning\u2019) is the creation of CSS rules to customise and personalise a particular domain. Aside from hiding adverts and other annoyances, there are many reasons for taking the time and effort to do it:\n\n\n\tImproving personal readability by changing text size and colour\n\tPersonalising the look of a web app like GMail to look less insipid\n\tRevealing microformats\n\tSport! My dreams of site skinning tennis are not yet fully realised, but it\u2019ll be all the rage by next Christmas, believe me.\n\n\nHopefully you\u2019re now asking \u201cBut how? HOW?!\u201d. The process of creating a site skin is roughly as follows:\n\n\n\tSee something you want to change\n\tFind out what it\u2019s called, and if any rules already apply to it\n\tWrite CSS rule(s) to override and/or enhance it.\n\tApply the rules\n\n\nSo let\u2019s get stuck in\u2026\n\nSee something\n\nLet\u2019s start small with Multimap.com. Look at that big header \u2013 it takes up an awful lot of screen space doesn\u2019t it? \n\n\n\nNo matter, we can fix it.\n\nTools\n\nNow we need to find out where that big assed header is in the DOM, and make overriding CSS rules. The best tool I\u2019ve found yet is the Mac OS X app, CSS Edit. It utilises a slick \u2018override stylesheets\u2019 function and DOM Inspector. Rather than give you all the usual DOM inspection tools, CSS Edit\u2019s is solely concerned with style. Go into \u2018X-Ray\u2019 mode, click an element, and look at the inspector window to see every style rule governing it. Click the selector to be taken to where it lives in the CSS. It really is a user styling dream app.\n\n\n\nHaving said all that, you can achieve all this with free, cross platform tools \u2013 namely Firefox with the Firebug and Stylish extensions. We\u2019ll be using them for these examples, so make sure you have them installed if you want to follow along.\n\n\n\nUsing Firebug, we can see that the page is very helpfully marked up, and that whole top area is simply a div with an ID of header. \n\nChange Something\n\nWhen you installed Stylish, it added a page and brush icon to your status bar. 
Click on that, and choose Write Style > for Multimap.com. The other options allow you to only create a style for a particular part of a website or URL, but we want this to apply to the whole of Multimap:\n\n\n\nThe \u2018Add Style\u2019 window then pops up, with the @-moz-document query at the top:\n\n@namespace url(http://www.w3.org/1999/xhtml);\n@-moz-document domain(\"multimap.com\") {\n}\n\nAll you need to do is add the CSS to hide the header, in between the curly brackets.\n\n@namespace url(http://www.w3.org/1999/xhtml);\n@-moz-document domain(\"multimap.com\") {\n #header {display: none;} \n}\n\n\n\nA click of the preview button shows us that it\u2019s worked! Now the map appears further up the page. The ethics of hiding adverts is a discussion for another time, but let\u2019s face it, when did you last whoop at the sight of a banner?\n\nMake Something Better\n\nIf we\u2019re happy with our modifications, all we need to do is give it a name and save. Whenever you visit Multimap.com, the style will be available. Stylish also allows you to toggle a style on/off via the status bar menu. If you feel you want to share this style with the world, then userstyles.org is the place to do it. It\u2019s a grand repository of customisations that Stylish connects with. Whenever you visit a site, you can see if anyone else has written a style for it, again, via the status bar menu \u201cFind Styles for this Page\u201d. Selecting this with \u201cBBC News\u201d shows that there are plenty of options, ranging from small layout tweaks to redesigns:\n\n\n\nWhat\u2019s more, whenever a style is updated, Stylish will notify you, and offer a one-click process to update it. This does only work in Firefox and Flock, so I\u2019ll cover ways of applying site styles to other browsers later.\n\nSpecific Techniques\n\nImportant!\n\nIn the Multimap example there wasn\u2019t a display specified on that element, but it isn\u2019t always going to be that easy. You may have spent most of your CSS life being a good designer and not resorting to adding !important to give your rule priority. There\u2019s no way to avoid this in user styling \u2013 if you\u2019re overriding an existing rule it\u2019s a necessity! Be prepared to be typing !important a lot.\n\nStar Selector\n\nThe Universal Selector is a particularly useful way to start a style. For example, if we want to make Flickr use Helvetica before Arial (as they should\u2019ve done!), we can cover all occurrences with just one rule:\n\n* {font-family: \"Helvetica Neue\", Helvetica, sans-serif !important;}\n\nYou can also use it to select \u2018everything within an element\u2019, by placing it after the element name:\n\n#content * {font-family: \"Helvetica Neue\", Helvetica, sans-serif !important;}\n\nSwapping Images\n\nIf you\u2019re changing something a little more complex, such as Google Reader, then at some point you\u2019ll probably want to change an . The technique for replacing an image involves:\n\n\n\tmaking your replacement image the background of the tag\n\tadding padding top and left to the size of you image to push the \u2018top\u2019 image away\n\tmaking the height and width zero.\n\n\n\n\nThe old image is then pushed out of the way and hidden from view, allowing the replacement in the background to be revealed. 
Targeting the image may require using an attribute selector:\n\nimg[src=\"/reader/ui/3544433079-tree-view-folder-open.gif\"] {\n\tpadding: 16px 0 0 16px;\n\twidth: 0 !important;\n\theight: 0 !important;\n\tbackground-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYA\nAAAf8/9hAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAA\nBx0RVh0U29mdHdhcmUAQWRvYmUgRmlyZXdvcmtzIENTM5jWRgMAAAAVdE\nVYdENyZWF0aW9uIFRpbWUAMjkvNi8wOJJ/BVgAAAG3SURBVDiNpZIhb5RBEIaf\n2W+vpIagIITSBIHBgsGjEYQaFLYShcITDL+ABIPnh4BFN0GQNFA4Cnf3fbszL2L3\njiuEVLDJbCazu8+8Mzsmif9ZBvDy7bvXlni0HRe8eXL/zuPzABng62J5kFKaAQS\nQgJAOgHMB9vDZq+d71689Hcyw9LfAZAYdioE10VSJo6OPL/KNvSuHD+7dhU\n0vHEsDUUWJChIlYJIjFx5BuMB2mJY/DnMoOJl/R147oBUR0QAm8LAGCOEh3IO\nULiAl8jSOy/nPetGsbGRKjktEiBCEHMlQj4loCuu4zCXCi4lUHTNDtGqEiACTqAFSI\nOgAUAKv4bkWVy2g6tAbJtGy0TNugM3HADmlurKH27dVZSecxjboXggiAsMItR\nh99wTILdewYRpXVJWtY85k7fPW8e1GpJFJacgesXs6VYYomz9G2yDhwPB7NEB\nBDAMK7WYJlisYVBCpfaJBeB+eocFyVyAgCaoMCTJSTOOCWSyILrAnaXpSexRsx\nGGAZ0AR+XT+5fjzyfwSpnUB/1w64xizVI/t6q3b+58+vJ96mWtLf9haxNoc8M\nv7N3d+AT4XPcFIxghoAAAAAElFTkSuQmCC) no-repeat !important;\n}\n\nWoah boy! What was all that gubbins in the background-image? It was a Data URI, and you can create these easily with Hixie\u2019s online tool. It\u2019s simply the image translated into text so that it can be embedded in the CSS, cutting down on the number of http requests. It\u2019s also a necessity with Mozilla browsers, as they don\u2019t allow user CSS to reference images stored locally. Converting images to URI\u2019s avoids this, as well as making a style easily portable \u2013 no images folder to pass around. \n\nDon\u2019t forget all your other CSS techniques at your disposal: inserting your own content with :before and :after pseudo classes, make elements semi-transparent with opacity and round box corners without hacking . You can have fun, and for once, enjoy the freedom of not worrying about IE!\n\nUser styling without Stylish\n\nInstead of using the Stylish extension, you can add rules to the userContent.css file, or use @import in that file to load a separate stylesheet. You can find this is in /Library/Application Support/Camino/chrome/ on OS X, or C/Program Files/Mozilla Firefox/Chrome on Windows. This is only way to apply user styles in Camino, but what about other browsers?\n\nOpera & Omniweb: \n\nBoth allow you to specify a custom CSS file as part of the site\u2019s preferences. Opera also allows custom javascript, using the same syntax as Greasemonkey scripts (more on that below)\n\nSafari\n\nThere are a few options here: the PithHelmet and SafariStand haxies both allow custom stylesheets, or alternatively, a Greasemonkey style user script can employed via GreaseKit. The latter is my favoured solution on my Helvetireader theme, as it can allow for more prescriptive domain rules, just like the Mozilla @-moz-document method. User scripts are also the solution supported by the widest range of browsers.\n\nWhat now?\n\nHopefully I\u2019ve given you enough information for you to be able start making your own styles. If you want to go straight in and tackle the \u2018Holy Grail\u2019, then off with you to GMail \u2013 I get more requests to theme that than anything else!\n\nIf you\u2019re a site author and want to encourage this sort of tom foolery, a good way is to provide a unique class or ID name with the body tag:\n\n\n\nThis makes it very easy to write rules that only apply to that particular site. 
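For example, if a site such as Multimap marked itself up with, say, <body id=\"multimap\">, a user stylesheet could anchor every rule to that hook \u2013 the body ID here is made up for illustration:\n\n/* Hypothetical hook: these rules can only ever match the site carrying this body ID */\n#multimap #header { display: none !important; }\n#multimap a { color: #036 !important; } /* arbitrary example tweak */\n\nBecause each selector starts with that ID, the rules cannot leak into any other site.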
If you wanted to use Safari without any of the haxies mentioned above, this method means you can include rules in a general CSS file (chosen via Preferences > Advanced > Stylesheet) without affecting other sites. \n\nOne final revelation on user styling \u2013 it\u2019s not just for web sites. You can tweak the UI of Firefox itself with the userChrome.css. You\u2019ll need to use the in-built DOM Inspector instead of Firebug to inspect the window chrome, instead of a page. Great if you want to make small tweaks (changing the size of tab text for example) without creating a full blown theme.", "year": "2008", "author": "Jon Hicks", "author_slug": "jonhicks", "published": "2008-12-03T00:00:00+00:00", "url": "https://24ways.org/2008/user-styling/", "topic": "process"} {"rowid": 288, "title": "Displaying Icons with Fonts and Data- Attributes", "contents": "Traditionally, bitmap formats such as PNG have been the standard way of delivering iconography on websites. They\u2019re quick and easy, and it also ensures they\u2019re as pixel crisp as possible. Bitmaps have two drawbacks, however: multiple HTTP requests, affecting the page\u2019s loading performance; and a lack of scalability, noticeable when the page is zoomed or viewed on a screen with a high pixel density, such as the iPhone 4 and 4S.\n\nThe requests problem is normally solved by using CSS sprites, combining the icon set into one (physically) large image file and showing the relevant portion via background-position. While this works well, it can get a bit fiddly to specify all the positions. In particular, scalability is still an issue. A vector-based format such as SVG sounds ideal to solve this, but browser support is still patchy.\n\n\n\nThe rise and adoption of web fonts have given us another alternative. By their very nature, they\u2019re not only scalable, but resolution-independent too. No need to specify higher resolution graphics for high resolution screens! \n\nThat\u2019s not all though:\n\n\n\tBrowser support: Unlike a lot of new shiny techniques, they have been supported by Internet Explorer since version 4, and, of course, by all modern browsers. We do need several different formats, however!\n\tDesign on the fly: The font contains the basic graphic, which can then be coloured easily with CSS \u2013 changing colours for themes or :hover and :focus styles is done with one line of CSS, rather than requiring a new graphic. You can also use CSS3 properties such as text-shadow to add further effects. Using -webkit-background-clip: text;, it\u2019s possible to use gradient and inset shadow effects, although this creates a bitmap mask which spoils the scalability.\n\tSmall file size: specially designed icon fonts, such as Drew Wilson\u2019s Pictos font, can be as little as 12Kb for the .woff font. This is because they contain fewer characters than a fully fledged font. You can see Pictos being used in the wild on sites like Garrett Murray\u2019s Maniacal Rage.\n\n\nAs with all formats though, it\u2019s not without its disadvantages: \n\n\n\tIcons can only be rendered in monochrome or with a gradient fill in browsers that are capable of rendering CSS3 gradients. Specific parts of the icon can\u2019t be a different colour.\n\tIt\u2019s only appropriate when there is an accompanying text to provide meaning. 
This can be alleviated by wrapping the text label in a tag (I like to use rather than , due to the fact that it\u2019s smaller and isn\u2019 t being used elsewhere) and then hiding it from view with text-indent:-999em.\n\tCreating an icon font can be a complex and time-consuming process. While font editors can carry out hinting automatically, the best results are achieved manually.\n\tUnless you\u2019re adept at creating your own fonts, you\u2019re restricted to what is available in the font. However, fonts like Pictos will cover the most common needs, and icons are most effective when they\u2019re using familiar conventions.\n\n\nThe main complaint about using fonts for icons is that it can mean adding a meaningless character to our markup. The good news is that we can overcome this by using one of two methods \u2013 CSS generated content or the data-icon attribute \u2013 in combination with the :before and :after pseudo-selectors, to keep our markup minimal and meaningful. \n\nOur simple markup looks like this:\n\nView Basket\n\nNote the multiple class attributes. Next, we\u2019ll import the Pictos font using the @font-face web fonts property in CSS:\n\n@font-face {\n font-family: 'Pictos';\n src: url('pictos-web.eot');\n src: local('\u263a'), \n url('pictos-web.woff') format('woff'), \n url('pictos-web.ttf') format('truetype'),\n url('pictos-web.svg#webfontIyfZbseF') format('svg');\n}\n\nThis rather complicated looking set of rules is (at the time of writing) the most bulletproof way of ensuring as many browsers as possible load the font we want. We\u2019ll now use the content property applied to the :before pseudo-class selector to generate our icon. Once again, we\u2019ll use those multiple class attribute values to set common icon styles, then specific styles for .basket. This helps us avoid repeating styles:\n\n.icon {\n font-family: 'Pictos';\n font-size: 22px:\n}\n\n.basket:before {\n content: \"$\";\n}\n\nWhat does the :before pseudo-class do? It generates the dollar character in a browser, even when it\u2019s not present in the markup. Using the generated content approach means our markup stays simple, but we\u2019ll need a new line of CSS, defining what letter to apply to each class attribute for every icon we add.\n\ndata-icon is a new alternative approach that uses the HTML5 data- attribute in combination with CSS attribute selectors. This new attribute lets us add our own metadata to elements, as long as its prefixed by data- and doesn\u2019t contain any uppercase letters. In this case, we want to use it to provide the letter value for the icon. Look closely at this markup and you\u2019ll see the data-icon attribute.\n\nView Basket\n\n\n\nWe could add others, in fact as many as we like.\n\nFavourites\nHistory\nLocation\n\n\n\nThen, we need just one CSS attribute selector to style all our icons in one go:\n\n.icon:before {\n content: attr(data-icon);\n /* Insert your fancy colours here */\n }\n\nBy placing our custom attribute data-icon in the selector in this way, we can enable CSS to read the value of that attribute and display it before the element (in this case, the anchor tag). It saves writing a lot of CSS rules. I can imagine that some may not like the extra attribute, but it does keep it out of the actual content \u2013 generated or not.\n\n\n\n\n\nThis could be used for all manner of tasks, including a media player and large simple illustrations. See the demo for live examples. 
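As a quick aside on the \u201cdesign on the fly\u201d point from earlier: restyling an icon for its hover and focus states needs no new graphic at all, just a colour value or two (the values below are arbitrary):\n\n.icon:hover:before,\n.icon:focus:before {\n\tcolor: #c00;\n\ttext-shadow: 0 1px 1px rgba(0, 0, 0, 0.3);\n}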
Go ahead and zoom the page, and the icons will be crisp, with the exception of the examples that use -webkit-background-clip: text as mentioned earlier.\n\nFinally, it\u2019s worth pointing out that with both generated content and the data-icon method, the letter will be announced to people using screen readers. For example, with the shopping basket icon above, the reader will say \u201cdollar sign view basket\u201d. As accessibility issues go, it\u2019s not exactly the worst, but could be confusing. You would need to decide whether this method is appropriate for the audience. Despite the disadvantages, icon fonts have huge potential.", "year": "2011", "author": "Jon Hicks", "author_slug": "jonhicks", "published": "2011-12-12T00:00:00+00:00", "url": "https://24ways.org/2011/displaying-icons-with-fonts-and-data-attributes/", "topic": "code"} {"rowid": 324, "title": "Debugging CSS with the DOM Inspector", "contents": "An Inspector Calls\n\nThe larger your site and your CSS becomes, the more likely that you will run into bizarre, inexplicable problems. Why does that heading have all that extra padding? Why is my text the wrong colour? Why does my navigation have a large moose dressed as Noel Coward on top of all the links? \n\nPerhaps you work in a collaborative environment, where developers and other designers are adding code? In which case, the likelihood of CSS strangeness is higher.\n\nYou need to debug. You need Firefox\u2019s wise-guy know-it-all, the DOM Inspector. \n\nThe DOM Inspector knows where everything is in your layout, and more importantly, what causes it to look the way it does. So without further ado, load up any css based site in your copy of Firefox (or Flock for that matter), and launch the DOM Inspector from the Tools menu.\n\nThe inspector uses two main panels \u2013 the left to show the DOM tree of the page, and the right to show you detail:\n\n\n\nThe Inspector will look at whatever site is in the front-most window or tab, but you can also use it without another window. Type in a URL at the top (A), press \u2018Inspect\u2019 (B) and a third panel appears at the bottom, with the browser view. I find this layout handier than looking at a window behind the DOM Inspector.\n\nStep 1 \u2013 find your node!\n\nEach element on your page \u2013 be it a HTML tag or a piece of text, is called a \u2018node\u2019 of the DOM tree. These nodes are all listed in the left hand panel, with any ID or CLASS attribute values next to them. When you first look at a page, you won\u2019t see all those yet. Nested HTML elements (such as a link inside a paragraph) have a reveal triangle next to their name, clicking this takes you one level further down. \n\nThis can be fine for finding the node you want to look at, but there are easier ways. Say you have a complex rounded box technique that involves 6 nested DIVs? You\u2019d soon get tired of clicking all those triangles to find the element you want to inspect. Click the top left icon \u00a9 \u2013 \u201cFind a node to inspect by clicking on it\u201d and then select the area you want to inspect. Boom! All that drilling down the DOM tree has been done for you! Huzzah!\n\nIf you\u2019re looking for an element that you know has an ID (such as