{"rowid": 30, "title": "Making Sites More Responsive, Responsibly", "contents": "With digital projects we\u2019re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations.\u00a0A product\u2019s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities.\n\nThe popularisation of responsive web design is a great example of how we are able to shape the web\u2019s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn\u2019t know or care about the difference in these approaches, but\u00a0we\u00a0did. Responsive design was similar in this respect \u2013 momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.\u00a0\n\nWe tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text.\n\n The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens.\n\nWhen we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. We can make our experiences more\u00a0responsive to people\u2019s needs, enhancing their usability and accessibility along the way.\n\nBeing responsibly responsive\n\nOf course, when we think about being more responsive, there\u2019s a fine line between creating useful functionality and becoming intrusive and overly complex. In the excellent Responsible Responsive Design, Scott Jehl states that:\n\n\nA responsible responsive design equally considers the following throughout a project:\n\nUsability: The way a website\u2019s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions.\nAccess: The ability for users of all devices, browsers, and assistive technologies to access and understand a site\u2019s features and content.\nSustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future.\nPerformance: The speed at which a site\u2019s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface.\n\n\n\nScott\u2019s book covers these ideas in a lot more detail than I\u2019ll be able to here (put it on your Christmas list if it\u2019s not there already), but for now let\u2019s think a bit more about our roles as digital creators\u00a0and the power this gives us.\n\nOur choices around technology and the decisions we have to make can be extremely wide-ranging. 
Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies.\n\nThe power of the web\n\nWe all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions.\n\nYou\u2019ll have seen articles with flashy titles such as \u201cTop 5 JavaScript APIs You\u2019ve Never Heard Of!\u201d, which you\u2019ll probably read, think \u201cThat\u2019s quite cool\u201d, yet never use in any real work.\n\nThere is great potential for technologies like these\u00a0to be misused, but there are also great prospects for them to be used well to enhance experiences. Let\u2019s have a look at a few\u00a0examples you may not have considered.\n\nOffline first\n\nWhen we make websites, many of us follow a process which involves user stories \u2013 standardised snippets of context explaining who needs what, and why.\n\n\u201cAs a student I want to pay online for my course so I don\u2019t have to visit the college in person.\u201d\n\n\u201cAs a retailer I want to generate unique product codes so I can manage my stock.\u201d\n\nWe very often focus heavily on what\u00a0needs doing, but may not consider carefully how it will be done. As in Scott\u2019s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense \u2013 including under different conditions.\n\nOffline first is yet another \u2018first\u2019 methodology (my personal favourite being \u2018tea first\u2019), which encourages us to develop so that connectivity\u00a0itself is an enhancement \u2013 letting\u00a0users continue with tasks even when they\u2019re offline. Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK\u2019s signal-barren East Anglian wilderness as I do), then you\u2019ll realise that connectivity isn\u2019t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I\u2019m sure we\u2019re all familiar with \u2013 the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, everyone\u2019s likely to be offline after all. Wouldn\u2019t it be better if we could do something like this instead?\n\n\n\tSomeone visits our conference website.\n\tOn this initial run, some assets may be cached for future use: the conference schedule, the site\u2019s CSS, photos of the speakers.\n\tWhen the attendee revisits the site on the day, the page shell loads up from the cache.\n\tIf we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. 
We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use.\n\tIf we don\u2019t have something cached already, then we can try\u00a0grabbing it online.\n\tIf for any reason our requests for new content fail (we\u2019re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is\u00a0cached.\n\n\nThere are a number of ways we can make something like this, including using the application cache (AppCache) if you\u2019re that way inclined. However, you may want to look into service workers\u00a0instead. There are also some great resources on Offline First!\u00a0if you\u2019d like to find out more about this.\n\nBuilding in offline functionality isn\u2019t necessarily about starting offline first, and it\u2019s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we\u2019d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. Thinking in this way can enhance each point in Scott\u2019s criteria.\n\nAs I mentioned, this isn\u2019t necessarily the kind of development choice that our clients will ask us for, but it\u2019s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation.\n\nEven more accessible\n\nWe\u2019ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? Can we make our sites even more usable and accessible through our development choices?\n\nAgain using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen.\n\nInput\n\nAmbient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. It\u2019s not new \u2013 many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels.\n\nIf our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in lux units using ambient light events, in JavaScript. 
We may then change our presentation based on different bandings, perhaps like this:\n\nwindow.addEventListener('devicelight', function(e) {\n var lux = e.value;\n\n if (lux < 50) {\n //Change things for dim light\n }\n if (lux >= 50 && lux <= 10000) {\n //Change things for normal light\n }\n if (lux > 10000) {\n //Change things for bright light\n }\n});\n\nLive demo\u00a0(requires light sensor and supported browser).\n\nSoon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it\u2019ll probably look something like this:\n\n@media (light-level: dim) {\n /*Change things for dim light*/\n}\n\n@media (light-level: normal) {\n /*Change things for normal light*/\n}\n\n@media (light-level: washed) {\n /*Change things for bright light*/\n}\n\nWhile we may be quick to dismiss this kind of detection as being a gimmick, it\u2019s important to consider that apps such as Light Detector, listed on Apple\u2019s accessibility page, provide important context around exactly this functionality.\n\n\n\t\u201cIf you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is. You can find out whether the shades are drawn by moving the device up and down.\u201d\n\n\teverywaretechnologies.com/apps/lightdetector\n\n\nInput can be about so much more than what we enter through keyboards. Both an ever-increasing number of available sensors and more APIs being supported by the major browsers will allow us to cater for more scenarios and respond to them accordingly. This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the web speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft.\n\nOutput\n\nWeb technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users.\n\nWhen we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we\u2019re seeing and hearing.\n\nHaptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here\u2019s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed by pauses in milliseconds.\n\nnavigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]);\n\nLive demo\u00a0(requires supported browser)\n\nWith great power\u2026\n\nWhat you\u2019ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. 
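For example, a quick capability check is enough to keep such enhancements away from browsers that lack them. A minimal sketch, using only the vibration API mentioned above (the pattern itself is purely illustrative):\n\nif ('vibrate' in navigator) {\n // The browser understands the vibration API \u2013 enhance away.\n navigator.vibrate([100, 300, 100]);\n}\n\nBrowsers without the API simply skip the block, and nobody gets a broken experience.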
This idea isn\u2019t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kinds of approaches within our projects \u2013 if we don\u2019t root for them, they probably won\u2019t happen. This is where our professional responsibility comes in.\n\nWe won\u2019t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we\u2019ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let\u2019s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations.\n\nWhen you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible.", "year": "2014", "author": "Sally Jenkinson", "author_slug": "sallyjenkinson", "published": "2014-12-10T00:00:00+00:00", "url": "https://24ways.org/2014/making-sites-more-responsive-responsibly/", "topic": "code"} {"rowid": 31, "title": "Dealing with Emergencies in Git", "contents": "The stockings were hung by the chimney with care,\nIn hopes that version control soon would be there.\n\nThis summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I\u2019ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information.\n\nLet\u2019s start by painting an updated version of Clement Clarke Moore\u2019s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen.\n\nPerhaps a few of these images are familiar, or maybe they\u2019re just settings you\u2019ve seen in a movie. It doesn\u2019t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: \u2018What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?\u2019 It\u2019s okay if the map isn\u2019t perfect. As you refine your understanding of a new topic, you\u2019ll outgrow the initial metaphors, analogies, and comparisons.\n\nWith apologies to Mr. Moore, let\u2019s give it a try.\n\nGetting Interrupted in Git\n\nWhen on the roof there arose such a clatter!\n\nYou\u2019re happily working on your software project when all of a sudden there are freaking reindeer on the roof! 
Whatever you\u2019ve been working on is going to need to wait while you investigate the commotion.\n\nIf you\u2019ve got even a little bit of experience working with Git, you know that you cannot simply change what you\u2019re working on in times of emergency. If you\u2019ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state.\n\nUp to this point, you\u2019ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of \u2018switching branches for a sec\u2019. This isn\u2019t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren\u2019t finished with what you were working on.\n\nYou don\u2019t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren\u2019t putting your book away though, you\u2019re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down.\n\nLet\u2019s say you\u2019ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof:\n\n$ git stash\n\nAfter running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work.\n\nWith the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it\u2019s not reindeer after all, but just your boss who thought they\u2019d help out by writing some code on the project you\u2019ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time.\n\nYou fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy:\n\n$ git fetch\n$ git branch -r\n$ git checkout -b helpful-boss-branch origin/helpful-boss-branch\n\nYou are now in a local copy of the branch where you are free to look around, and figure out exactly what\u2019s going on.\n\nYou sigh audibly and say, \u2018Okay. Tell me what was happening when you first realised you\u2019d gotten into a mess\u2019 as you look through the log messages for the branch.\n\n$ git log --oneline\n$ git log\n\nBy using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof.\n\nYou may also want to compare the work your boss has done to the main branch for your project. 
For this article, we\u2019ll assume the main branch is named master.\n\n$ git diff master\n\nLooking through the commits, you may be able to see that things started out okay but then took a turn for the worse.\n\nChecking out a single commit\n\nUsing commands you\u2019re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch.\n\nUsing the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let\u2019s say the unique identifier you want to checkout is 25f6d7f.\n\n$ git checkout 25f6d7f\n\nNote: checking out '25f6d7f'.\n\nYou are in 'detached HEAD' state. You can look around,\nmake experimental changes and commit them, and you can\ndiscard any commits you make in this state without\nimpacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:\n\n$ git checkout -b new_branch_name\n\nHEAD is now at 25f6d7f... Removed first paragraph.\n\nThis is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic.\n\nTake a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you\u2019ve temporarily disconnected from a known chain of events. In other words, you\u2019re currently looking at the middle of a story (or branch) about what happened \u2013 and you\u2019re not at the endpoint for this particular story.\n\nGit allows you to view the history of your repository as a timeline (technically it\u2019s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. If you make commits while you\u2019re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work.\n\n$ git checkout master\n\nWarning: you are leaving 1 commit behind, not connected to\nany of your branches:\n\n 7a85788 Your witty holiday commit message.\n\nIf you want to keep them by creating a new branch, this may be a good time to do so with:\n\n$ git branch new_branch_name 7a85788\n\nSwitched to branch 'master'\nYour branch is up-to-date with 'origin/master'.\n\nSo, if you want to save the commits you\u2019ve made while in a detached HEAD state, you simply need to put them on a new branch.\n\n$ git branch saved-headless-commits 7a85788\n\nWith this trick under your belt, you can jingle around in history as much as you\u2019d like. It\u2019s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. If you want to move forward in history, you\u2019ll need to move back to the branch tip by checking out the branch again.\n\n$ git checkout helpful-boss-branch\n\nYou\u2019re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you\u2019ve followed the instructions Git gave you. 
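To recap, the entire detour \u2013 reusing the example commit IDs from above \u2013 condenses down to checking out the commit, committing any experiments, returning to the branch, and then rescuing the stranded work:\n\n$ git checkout 25f6d7f\n$ git add [filename(s)]\n$ git commit -m \"Your witty holiday commit message.\"\n$ git checkout helpful-boss-branch\n$ git branch saved-headless-commits 7a85788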
That wasn\u2019t so scary after all, now, was it?\n\nBack to our reindeer problem.\n\nIf your boss is anything like the bosses I\u2019ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you\u2019ll want to capture the good work using one of several different methods.\n\nBack in the living room, we\u2019ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work.\n\nSaving just one commit\n\nAbout that bowl of nuts. If you\u2019re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we\u2019re just going to pick out one nut from the bowl. In Git terms, we\u2019re going to cherry-pick a commit and save it to another branch.\n\nFirst, checkout the main branch for your development work. From this branch, create a new branch where you can copy the changes into.\n\n$ git checkout master\n$ git checkout -b rescue-the-boss\n\nFrom your boss\u2019s branch, helpful-boss-branch, locate the commit you want to keep.\n\n$ git log --oneline helpful-boss-branch\n\nLet\u2019s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch.\n\n$ git cherry-pick e08740b\n\nIf you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss\u2019s branch.\n\nAt this point you might need to make a few additional fixes to help your boss out. (You\u2019re angling for a bonus out of all this. Go the extra mile.) Once you\u2019ve made your additional changes, you\u2019ll need to add that work to the branch as well.\n\n$ git add [filename(s)]\n$ git commit -m \"Building on boss's work to improve feature X.\"\n\nGo ahead and test everything, and make sure it\u2019s perfect. You don\u2019t want to introduce your own mistakes during the rescue mission!\n\nUploading the fixed branch\n\nThe next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping them fix their branch.\n\n$ git push -u origin rescue-the-boss\n\nCleaning up and getting back to work\n\nWith your boss rescued, and your bonus secured, you can now delete the local temporary branches.\n\n$ git branch --delete rescue-the-boss\n$ git branch --delete helpful-boss-branch\n\nAnd settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat.\n\n$ git checkout waiting-for-st-nicholas\n$ git stash pop\n\nYour working directory has been returned to exactly the same state you were in at the beginning of the article.\n\nHaving fun with analogies\n\nI\u2019ve had a bit of fun with analogies in this article. But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it\u2019s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it\u2019s like trying to find that one broken light on the string of Christmas lights). It doesn\u2019t matter if the analogy isn\u2019t perfect. It\u2019s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. 
As the learner\u2019s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works.\n\nOr, if you\u2019re like me, you can choose to never grow old and just keep mucking about in the analogies. I\u2019d argue it\u2019s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.", "year": "2014", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2014-12-02T00:00:00+00:00", "url": "https://24ways.org/2014/dealing-with-emergencies-in-git/", "topic": "code"} {"rowid": 36, "title": "Naming Things", "contents": "There are only two hard things in computer science: cache invalidation and naming things.\nPhil Karlton\n\n\nBeing a professional web developer means taking responsibility for the code you write and ensuring it is comprehensible to others. Having a documented code style is one means of achieving this, although the size and type of project you\u2019re working on will dictate the conventions used and how rigorously they are enforced.\n\nWorking in-house may mean working with multiple developers, perhaps in distributed teams, who are all committing changes \u2013 possibly to a significant codebase \u2013 at the same time. Left unchecked, this codebase can become unwieldy. Coding conventions ensure everyone can contribute, and help build a product that works as a coherent whole.\n\nEven on smaller projects, perhaps working within an agency or by yourself, at some point the resulting product will need to be handed over to a third party. It\u2019s sensible, therefore, to ensure that your code can be understood by those who\u2019ll eventually take ownership of it.\n\nPut simply, code is read more often than it is written or changed. A consistent and predictable naming scheme can make code easier for other developers to understand, improve and maintain, presumably leaving them free to worry about cache invalidation.\n\nLet\u2019s talk about semantics\n\nNames not only allow us to identify objects, but they can also help us describe the objects being identified.\n\nSemantics (the meaning or interpretation of words) is the cornerstone of standards-based web development. Using appropriate HTML elements allows us to create documents and applications that have implicit structural meaning. Thanks to HTML5, the vocabulary we can choose from has grown even larger.\n\nHTML elements provide one level of meaning: a widely accepted description of a document\u2019s underlying structure. It\u2019s only with the mutual agreement of browser vendors and developers that
<p>
indicates a paragraph.\n\nYet (with the exception of widely accepted microdata and microformat schemas) only HTML elements convey any meaning that can be parsed consistently by user agents. While using semantic values for class names is a noble endeavour, they provide no additional information to the visitor of a website; take them away and a document will have exactly the same semantic value.\n\nI didn\u2019t always think this was the case, but the real world has a habit of changing your opinion. Much of my thinking around semantics has been informed by the writing of my peers. In \u201cAbout HTML semantics and front-end architecture\u201d, Nicolas Gallagher wrote:\n\n\n\tThe important thing for class name semantics in non-trivial applications is that they be driven by pragmatism and best serve their primary purpose \u2013 providing meaningful, flexible, and reusable presentational/behavioural hooks for developers to use.\n\n\nThese thoughts are echoed by Harry Roberts in his CSS Guidelines:\n\n\n\tThe debate surrounding semantics has raged for years, but it is important that we adopt a more pragmatic, sensible approach to naming things in order to work more efficiently and effectively. Instead of focussing on \u2018semantics\u2019, look more closely at sensibility and longevity \u2013 choose names based on ease of maintenance, not for their perceived meaning.\n\n\nNaming methodologies\n\nFront-end development has undergone a revolution in recent years. As the projects we\u2019ve worked on have grown larger and more important, our development practices have matured. The pros and cons of object-orientated approaches to CSS can be endlessly debated, yet their introduction has highlighted the usefulness of having documented naming schemes.\n\nJonathan Snook\u2019s SMACSS (Scalable and Modular Architecture for CSS) collects style rules into five categories: base, layout, module, state and theme. This grouping makes it clear what each rule does, and is aided by a naming convention:\n\n\n\tBy separating rules into the five categories, naming convention is beneficial for immediately understanding which category a particular style belongs to and its role within the overall scope of the page. On large projects, it is more likely to have styles broken up across multiple files. In these cases, naming convention also makes it easier to find which file a style belongs to.\n\n\tI like to use a prefix to differentiate between layout, state and module rules. For layout, I use l- but layout- would work just as well. Using prefixes like grid- also provide enough clarity to separate layout styles from other styles. For state rules, I like is- as in is-hidden or is-collapsed. This helps describe things in a very readable way.\n\n\nSMACSS is more a set of suggestions than a rigid framework, so its ideas can be incorporated into your own practice. Nicolas Gallagher\u2019s SUIT CSS project is far more strict in its naming conventions:\n\n\n\tSUIT CSS relies on structured class names and meaningful hyphens (i.e., not using hyphens merely to separate words). This helps to work around the current limits of applying CSS to the DOM (i.e., the lack of style encapsulation), and to better communicate the relationships between classes.\n\n\nOver the last year, I\u2019ve favoured a BEM-inspired approach to CSS. BEM stands for block, element, modifier, which describes the three types of rule that contribute to the style of a single component. This means that, given the following markup:\n\n
<div class=\"sleigh\">\n <p class=\"sleigh__reindeer sleigh__reindeer--famous\">Rudolph</p>\n</div>
\n\nI know that:\n\n\n\t.sleigh is a containing block or component.\n\t.sleigh__reindeer is used only as a descendant element of .sleigh.\n\t.sleigh__reindeer--famous is used only as a modifier of .sleigh__reindeer.\n\n\nWith this naming scheme in place, I know which styles relate to a particular component, and which are shared. Beyond reducing specificity-related head-scratching, this approach has given me a framework within which I can consistently label items, and has sped up my workflow considerably.\n\nEach of these methodologies shows that any robust CSS naming convention will have clear rules around case (lowercase, camelCase, PascalCase) and the use of special (allowed) characters like hyphens and underscores.\n\nWhat makes for a good name?\n\nRegardless of higher-level conventions, there\u2019s no getting away from the fact that, at some point, we\u2019re still going to have to name things. Recognising that classes should be named with other developers in mind, what makes for a good name?\n\nUnderstandable\n\nThe most important aspect is for a name to be understandable. Words used in your project may come from a variety of sources: some may be widely understood, and others only be recognised by people working within a particular environment.\n\n\n\tCulture\nMost words you\u2019ll choose will have common currency outside the world of web development, although they may have a particular interpretation among developers (think menu, list, input). However, words may have a narrower cultural significance; for example, in Germany and other German-speaking countries, impressum is the term used for legally mandated statements of ownership.\n\tIndustry\nIndustries often use specific terms to describe common business practices and concepts. Publishing has a number of these (headline, standfirst, masthead, colophon\u2026), all of which have well understood meanings \u2013 and not all of them are relevant to online usage.\n\tOrganisation\nCompanies may have internal names (or nicknames) for their products and services. The Guardian is rife with such names: bisons (and buffalos), pixies (and super-pixies), bentos (and mini-bentos)\u2026 all of which mean something very different outside the organisation. Although such names can be useful inside smaller teams, in larger organisations they can become a barrier to entry, a sort of secret code used among employees who have been around long enough to know what they mean.\n\tProduct\nYour team will undoubtedly have created names for specific features or interface components used in your product. For example, at Clearleft we coined the term gravigation for a navigation bar that was pinned to the bottom of the viewport. Elements of a visual design language may have names, too. Transport for London\u2019s bar and circle logo is known internally as the roundel, while Nike\u2019s logo is called the swoosh. Branding agencies often christen colours within a brand palette, too, either to evoke aspects of the identity or to indicate intended usage.\n\n\nOnce you recognise the origin of the words you use, you\u2019ll be better able to judge their appropriateness. Using Latin words for class names may satisfy a need to use semantic-sounding terms but, unless you work in a company whose employees have a basic grasp of Latin, a degree of translation will be required. 
Military ranks might be a clever way of declaring sizes without implying actual values, but I\u2019d venture most people outside the armed forces don\u2019t know how they\u2019re ordered.\n\nObvious\n\nQuite often, the first name that comes into your head will be the best option. Names that obliquely reference the function of a class (e.g. receptacle instead of container, kevlar instead of no-bullets) only serve to add an additional layer of abstraction. Don\u2019t overthink it!\n\nOne way of knowing if the names you use are well understood is to look at what similar concepts are called in existing vocabularies. schema.org, Dublin Core and the BBC\u2019s ontologies are all useful sources for object names.\n\nFunctional\n\nWhile we\u2019ve learned to avoid using presentational classes, there remains a tension between naming things based on their content, and naming them for their intended presentation or behaviour (which may change at different breakpoints). Rather than think about a component\u2019s appearance or behaviour, instead look to its function, its purpose. To clarify, ask what a component\u2019s function is, and not how the component functions.\n\nFor example, the Guardian\u2019s internal content system uses the following names for different types of image placement: supporting, showcase and thumbnail, with inline being the default. These options make no promise of the resulting position on a webpage (or smartphone app, or television screen\u2026), but do suggest intended use, and therefore imply the likely presentation.\n\nConsistent\n\nBeing consistent in your approach to names will allow for easier naming of successive components, and extending the vocabulary when necessary. For example, a predictably named hierarchy might use names like primary and secondary. Should another level need to be added, tertiary is clearly to be preferred over third.\n\nAppropriate\n\nYour project will feature a mix of style rules. Some will perform utility functions (clearing floats, removing bullets from a list, resetting margins), while others will perform specific functions used only once or twice in a project. Names should reflect this. For commonly used classes, be generic; for unique components be more specific.\n\nIt\u2019s also worth remembering that you can use multiple classes on an element, so combining both generic and specific can give you a powerful modular design system:\n\n\n\tGeneric: list\n\tSpecific: naughty-children\n\tCombined: naughty-children list\n\n\nIf following the BEM methodology, you might use the following classes:\n\n\n\tGeneric: list\n\tSpecific: list--nice-children\n\tCombined: list list--nice-children\n\n\nExtensible\n\nGood naming schemes can be extended. One way of achieving this is to use namespaces, which are basically a way of grouping related names under a higher-level term.\n\nMicroformats are a good example of a well-designed naming scheme, with many of its vocabularies taking property names from existing and related specifications (e.g. hCard is a 1:1 representation of vCard). Microformats 2 goes one step further by grouping properties under several namespaces:\n\n\n\th-* for root class names (e.g. h-card)\n\tp-* for simple (text) properties (e.g. p-name)\n\tu-* for URL properties (e.g. u-photo)\n\tdt-* for date/time properties (e.g. dt-bday)\n\te-* for embedded markup properties (e.g. 
e-note)\n\n\nThe inclusion of namespaces is a massive improvement over the earlier specification, but the downside is that microformats now occupy five separate namespaces. This might be problematic if you are using u-* for your utility classes. While nothing will break, your naming system won\u2019t be as robust, so plan accordingly.\n\n(Note: Microformats perform a very specific function, separate from any presentational concerns. It\u2019s therefore considered best practice to not use microformat classes as styling hooks, but instead use additional classes that relate to the function of the component and adhere to your own naming conventions.)\n\nShort\n\nNames should be as long as required, but no longer. When looking for words to describe a particular function, I try to look for single words where possible. Avoid abbreviations unless they are understood within the contexts described above. rrp is fine if labelling a recommended retail price in an online shop, but not very helpful if used to mean ragged-right paragraph, for example.\n\nFun!\n\nFinally, names can be an opportunity to have some fun! Names can give character to a project, be it by providing an outlet for in-jokes or adding little easter eggs for those inclined to look.\n\nThe copyright statement on Apple\u2019s website has long been named sosumi, a word that has a nice little history inside Apple. Until recently, the hamburger menu icon on the Guardian website was labelled honest-burger, after the developer\u2019s favourite burger restaurant.\n\nA few thoughts on preprocessors\n\nCSS preprocessors have solved a lot of problems, but they have an unfortunate downside: they require you to name yet more things! Whereas we needed to worry only about style rules, now we need names for variables, mixins, functions\u2026 oh my!\n\nA second article could be written about naming these, so for now I\u2019ll offer just a few thoughts. The first is to note that preprocessors make it easier to change things, as they allow for DRYer code. So while the names of variables are important (and the advice in this article still very much applies), you can afford to relax a little.\n\nLooking to name colour variables? If possible, find out if colours have been assigned names in a brand palette. If not, use obvious names (based on appearance or function, depending on your preference) and adapt as the palette grows. If it becomes difficult to name colours that are too similar, I\u2019d venture that the problem lies with the design rather than the naming scheme.\n\nThe same is true for responsive breakpoints. Preprocessors allow you to move awkward naming conventions out of the markup and into the CSS. Although terms like mobile, tablet and desktop are not desirable given the need to think about device-agnostic design, if these terms are widely understood within a product team and among stakeholders, using them will ensure everyone is using the same language (they can always be changed later).\n\nIt still feels like we\u2019re at the very beginning of understanding how preprocessors fit into a development workflow, if at all! I suspect over the next few years, best practices will emerge for all of these considerations. In the meantime, use your brain!\n\n\n\nEven with sensible rules and conventions in place, naming things can remain difficult, but hopefully I\u2019ve made this exercise a little less painful. 
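To end on something concrete, here\u2019s how a few of the preprocessor suggestions above might look in Sass \u2013 just a sketch, with every name invented for illustration:\n\n// Palette named from the brand guidelines, not from appearance\n$color-brand-primary: #b71c1c;\n$color-brand-secondary: #1b5e20;\n\n// Breakpoint named in terms the whole team already understands\n$breakpoint-tablet: 48em;\n\n@media (min-width: $breakpoint-tablet) {\n /* wider-viewport adjustments */\n}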
Christmas is a time of giving, so to the developer reading your code in a year\u2019s time, why not make your gift one of clearer class names.", "year": "2014", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2014-12-21T00:00:00+00:00", "url": "https://24ways.org/2014/naming-things/", "topic": "code"} {"rowid": 37, "title": "JavaScript Modules the ES6 Way", "contents": "JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. \n\nThis is something nearly all other languages come with out of the box, whether it be Ruby\u2019s require, Python\u2019s import, or any other language you\u2019re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let\u2019s be clear: it doesn\u2019t mean the new module system in the upcoming version of JavaScript won\u2019t be useful to you if you\u2019re building smaller websites rather than the next Instagram.\n\nThankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won\u2019t be landing in browsers for a while yet \u2013 but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I\u2019d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It\u2019s much simpler than you might think.\n\nWhat is ES6?\n\nECMAScript is a scripting language that is standardised by an organisation called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It\u2019s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. You shouldn\u2019t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet.\n\nThe ES6 module spec\n\nThe ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I\u2019ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it\u2019s what I\u2019ll focus on more.\n\nModule syntax\n\nThe key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. 
For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we\u2019ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it.\n\nAnother important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There\u2019s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous.\n\nNamed exports\n\nModules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword:\n\nexport function double(x) {\n return x + x;\n};\n\n\nYou can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces.\n\nvar double = function(x) {\n return x + x;\n}\n\nexport { double };\n\nA module can then import the double function like so:\n\nimport { double } from 'mymodule';\ndouble(2); // 4\n\nAgain, curly braces are required around the variable you would like to import. It\u2019s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension.\n\nThe reason for those extra braces is that this syntax lets you export multiple variables:\n\nvar double = function(x) {\n return x + x;\n}\n\nvar square = function(x) {\n return x * x;\n}\n\nexport { double, square }\n\nI personally prefer this syntax over the export function \u2026, but only because it makes it much clearer to me what the module exports. Typically I will have my export {\u2026} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting.\n\nA file importing both double and square can do so in just the way you\u2019d expect:\n\nimport { double, square } from 'mymodule';\ndouble(2); // 4\nsquare(3); // 9\n\nWith this approach you can\u2019t easily import an entire module and all its methods. This is by design \u2013 it\u2019s much better and you\u2019re encouraged to import just the functions you need to use.\n\nDefault exports\n\nAlong with named exports, the system also lets a module have a default export. This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so:\n\nexport default function(x) {\n return x + x;\n}\n\nAnd that can be imported:\n\nimport double from 'mymodule';\ndouble(2); // 4\n\n\nThis time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you\u2019d like. Default exports are not named, so you can import them as anything you like:\n\nimport christmas from 'mymodule';\nchristmas(2); // 4\n\nThe above is entirely valid.\n\nAlthough it\u2019s not something that is used too often, a module can have both named exports and a default export, if you wish.\n\nOne of the design goals of the ES6 modules spec was to favour default exports. 
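As a quick illustration of combining the two, here\u2019s a sketch of a module with both a default and a named export, along with its consumer (file names ours):\n\n// mymodule.js\nexport default function(x) {\n return x + x;\n}\n\nexport function square(x) {\n return x * x;\n}\n\n// elsewhere\nimport double, { square } from 'mymodule';\ndouble(2); // 4\nsquare(3); // 9\n\nNote that the default import sits outside the braces, and, as just mentioned, the spec\u2019s design deliberately leans towards that default form.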
There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that\u2019s fine, and you shouldn\u2019t change that to meet the preferences of those designing the spec.\n\nProgrammatic API\n\nAlong with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It\u2019s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user\u2019s browser didn\u2019t support a feature your app relied on. An example of doing this is:\n\nif(someFeatureNotSupported) {\n System.import('my-polyfill').then(function(myPolyFill) {\n // use the module from here\n });\n}\n\nSystem.import will return a promise, which, if you\u2019re not familiar, you can read about in this excellent article on HTML5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import) is complete.\n\nThis programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task.\n\nHow to use it today\n\nIt\u2019s all well and good having this new syntax, but right now it won\u2019t work in any browser \u2013 and it\u2019s not likely to for a long time. Maybe in next year\u2019s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we\u2019re stuck with a bit of extra work.\n\nES6 module transpiler\n\nOne solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it \u2013 not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). There are also plugins for all the popular build tools, including Grunt and Gulp.\n\nThe advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should have that set up already. If you don\u2019t have any system in place for handling modules in the browser, using the transpiler doesn\u2019t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn\u2019t do anything to help you get that code running in the browser, but if you have that part sorted it\u2019s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler.\n\nSystemJS\n\nAnother solution is SystemJS. 
It\u2019s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser).\n\nTo load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill; and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn\u2019t have an easy to find downloadable version. The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 transpiler. It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you.\n\nAll you need to do to get SystemJS running is to add a <script> element that loads SystemJS, and then ask it to import your application\u2019s main file, something like this:\n\n<script src=\"system.js\"></script>\n<script>\n System.import('./app');\n</script>\n\nWhen you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn\u2019t make a lot of requests on the live site. In development though, it makes for a really nice workflow.\n\nWhen working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application\u2019s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file.\n\nSystemJS also provides a workflow for bundling your application together into one file.\n\nConclusion\n\nES6 modules may be at least six months to a year away (if not more) but that doesn\u2019t mean they can\u2019t be used today. Although there is an overhead to using them now \u2013 with the work required to set up SystemJS, the module transpiler, or another solution \u2013 that doesn\u2019t mean it\u2019s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You\u2019ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution.\n\nIf you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. 
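One last sketch to tie the workflow together: the app.js entry point might do nothing more than pull in the modules that make up the application (module and function names invented for illustration):\n\n// app.js \u2013 the single file System.import loads; everything else hangs off it\nimport { setupCarousel } from './carousel';\nimport { double } from './maths';\n\nsetupCarousel(document.querySelector('.carousel'));\nconsole.log(double(2)); // 4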
Investing time in learning it now will pay off hugely further down the road.\n\nFurther reading\n\nIf you\u2019d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources:\n\n\n\tECMAScript 6 modules: the final syntax by Axel Rauschmayer\n\tPractical Workflows for ES6 Modules by Guy Bedford\n\tECMAScript 6 resources for the curious JavaScripter by Addy Osmani\n\tTracking ES6 support by Addy Osmani\n\tES6 Tools List by Addy Osmani\n\tUsing Grunt and the ES6 Module Transpiler by Thomas Boyt\n\tJavaScript Modules and Dependencies with jspm by myself\n\tUsing ES6 Modules Today by Guy Bedford", "year": "2014", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2014-12-03T00:00:00+00:00", "url": "https://24ways.org/2014/javascript-modules-the-es6-way/", "topic": "code"} {"rowid": 38, "title": "Websites of Christmas Past, Present and Future", "contents": "The websites of Christmas past\n\nThe first website was created at CERN. It was launched on 20 December 1990 (just in time for Christmas!), and it still works today, after twenty-four years. Isn\u2019t that incredible?!\n\nWhy does this website still work after all this time? I can think of a few reasons.\n\nFirst, the authors of this document chose HTML. Of course they couldn\u2019t have known back then the extent to which we would be creating documents in HTML, but HTML always had a lot going for it. It\u2019s built on top of plain text, which means it can be opened in any text editor, and it\u2019s pretty readable, even without any parsing.\n\nDespite the fact that HTML has changed quite a lot over the past twenty-four years, extensions to the specification have always been implemented in a backwards-compatible manner. Reading through the 1992 W3C document HTML Tags, you\u2019ll see just how it has evolved. We still have h1 \u2013 h6 elements, but I\u2019d not heard of the element before. Despite being deprecated since HTML2, it still works in several browsers. You can see it in action on my website.\n\nAs well as being written in HTML, there is no run-time compilation of code; the first website simply consists of HTML files transmitted over the web. Due to its lack of complexity, it stood a good chance of surviving in the turbulent World Wide Web.\n\nThat\u2019s all well and good for a simple, static website. But websites created today are increasingly interactive. Many require a login and provide experiences that are tailored to the individual user. This type of dynamic website requires code to be executed somewhere.\n\nTraditionally, dynamic websites would execute such code on the server, and transmit a simple HTML file to the user. As far as the browser was concerned, this wasn\u2019t much different from the first website, as the additional complexity all happened before the document was sent to the browser.\n\nDoing it all in the browser\n\nIn 2003, the first single page interface was created at slashdotslash.com. A single page interface or single page app is a website where the page is created in the browser via JavaScript. The benefit of this technique is that, after the initial page load, subsequent interactions can happen instantly, or very quickly, as they all happen in the browser.\n\nWhen software runs on the client rather than the server, it is often referred to as a fat client. 
This means that the bulk of the processing happens on the client rather than the server (which can now be thin).\n\nA fat client is preferred over a thin client because:\n\n\n\tIt takes some processing requirements away from the server, thereby reducing the cost of servers (a thin server set-up requires cheaper, or fewer, servers).\n\tIt can often continue working offline, provided no server communication is required to complete tasks after initial load.\n\tThe latency of internet communications is bypassed after initial load, as interactions can appear near instantaneous when compared to waiting for a response from the server.\n\n\nBut there are also some big downsides, and these are often overlooked:\n\n\n\tThey can\u2019t work without JavaScript. Obviously JavaScript is a requirement for any client-side code execution. And as the UK Government Digital Service discovered, 1.1% of their visitors did not receive JavaScript enhancements. Of that 1.1%, 81% had JavaScript enabled, but their browsers failed to execute it (possibly due to dropping the internet connection). If you care about 1.1% of your visitors, you should care about the non-JavaScript experience for your website.\n\tThe browser needs to do all the processing. This means that the hardware it runs on needs to be fast. It also means that we require all clients to have largely the same capabilities and browser APIs.\n\tThe initial payload is often much larger, and nothing will be rendered for the user until this payload has been fully downloaded and executed. If the connection drops at any point, or the code fails to execute owing to a bug, we\u2019re left with the non-JavaScript experience.\n\tThey are not easily indexed, as every crawler now needs to run JavaScript just to receive the content of the website.\n\n\nThese are not merely edge case issues to shrug off. The first three issues will affect some of your visitors; the fourth affects everyone, including you.\n\nWhat problem are we trying to solve?\n\nSo what can be done to address these issues? Whereas fat clients solve some inherent issues with the web, they seem to create as many problems. When attempting to resolve any issue, it\u2019s always good to try to uncover the original problem and work forwards from there. One of the best ways to frame a problem is as a user story. A user story considers the who, what and why of a need. Here\u2019s a template:\n\n\n\tAs a {who} I want {what} so that {why}\n\n\nI haven\u2019t got a specific project in mind, so let\u2019s refer to the who as user. Here\u2019s one that could explain the use of thick clients.\n\n\n\tAs a user I want the site to respond to my actions quickly so that I get immediate feedback when I do something.\n\n\nThis user story could probably apply to a great number of websites, but so could this:\n\n\n\tAs a user I want to get to the content quickly, so that I don\u2019t have to wait too long to find out what the site is all about or get the content I need.\n\n\nA better solution\n\nHow can we balance both these user needs? How can we have a website that loads fast, and also reacts fast? The solution is to have a thick server that serves the complete document, and then a thick client that manages subsequent actions and replaces parts of the page. What we\u2019re talking about here is simply progressive enhancement, but from the user\u2019s perspective.\n\nThe initial payload contains the entire document. At this point, all interactions would happen in a traditional way using links or form elements. 
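One minimal way to fetch the enhancement script without blocking that initial experience \u2013 a sketch only, with a made-up file name \u2013 is a deferred script in the document:\n\n<!-- the document is fully usable before this script arrives -->\n<script src=\"/js/enhancements.js\" defer></script>\n\nThe defer attribute asks the browser to download the script in the background and execute it only once the document has been parsed. 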
Then, once we\u2019ve downloaded the JavaScript (asynchronously, after load) we can enhance the experience with JavaScript interactions. If for whatever reason our JavaScript fails to download or execute, it\u2019s no biggie \u2013 we\u2019ve already got a fully functioning website. If an API that we need isn\u2019t available in this browser, it\u2019s not a problem. We just fall back to the basic experience.\n\nThis second point, of having some minimum requirement for an enhanced experience, is often referred to as cutting the mustard, first used in this sense by the BBC News team. Essentially it\u2019s an if statement like this:\n\nif('querySelector' in document\n && 'localStorage' in window\n && 'addEventListener' in window) {\n // bootstrap the JavaScript application\n }\n\nThis code states that the browser must support the following methods before downloading and executing the JavaScript:\n\n\n\tdocument.querySelector (can it find elements by CSS selectors)\n\twindow.localStorage (can it store strings)\n\twindow.addEventListener (can it bind to events in a standards-compliant way)\n\n\nThese three properties are what the BBC News team decided to test for, as they are present in their website\u2019s JavaScript. Each website will have its own requirements. The last method, window.addEventListener, is an interesting one. Although it\u2019s simple to bind to events on IE8 and earlier, these browsers have very inconsistent support for standards. Making any JavaScript-heavy website work on IE8 and earlier is a painful exercise, and comes at a cost to all users on other browsers, as they\u2019ll download unnecessary code to patch support for IE.\n\n JavaScript API support by browser.\n\nI discovered that IE8 supports 12% of the current JavaScript APIs, while IE9 supports 16%, and IE10 51%. It seems, then, that IE10 could be the earliest version of IE that I\u2019d like to develop JavaScript for. That doesn\u2019t mean that users on browsers earlier than 10 can\u2019t use the website. On the contrary, they get the core experience, and because it\u2019s just HTML and CSS, it\u2019s much more likely to be bug-free, and could even provide a better experience than trying to run JavaScript in their browser. They receive the thin client experience.\n\nBy reducing the number of platforms that our enhanced JavaScript version supports, we can better focus our efforts on those platforms and offer an even greater experience to those users. But we can only do that if we use progressive enhancement. Otherwise our website would be completely broken for all other users.\n\nSo what we have is a thick server, capable of serving the entire website to our users, complete with all core functionality needed for our users to complete their tasks; and we have a thick client on supported browsers, which can bring an even greater experience to those users.\n\nThis is all transparent to users. They may notice that the website seems snappier on the new iPhone they received for Christmas than on the Windows 7 machine they got five years ago, but then they probably expected it to be faster on their iPhone anyway.\n\nIsn\u2019t this just more work?\n\nIt\u2019s true that making a thick server and a thick client is more work than just making one or the other. 
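Part of that work is deciding exactly who gets the enhanced experience \u2013 and, as we saw above, the mustard-cutting test can gate the download itself, so that non-qualifying browsers never fetch code they can\u2019t run. A rough sketch, with a hypothetical file name:\n\nif ('querySelector' in document\n && 'localStorage' in window\n && 'addEventListener' in window) {\n // this browser cuts the mustard: fetch the enhanced application\n var script = document.createElement('script');\n script.src = '/js/app.js';\n document.body.appendChild(script);\n}\n\nEveryone still gets the core experience; qualifying browsers download and run the enhancements on top. So yes, it is extra effort. 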
But there are some big advantages:\n\n\n\tThe website works for everyone.\n\tYou can decide when users get the enhanced experience.\n\tYou can enhance features in an iterative (or agile) manner.\n\tWhen the website breaks, it doesn\u2019t break down.\n\tThe more you practise this approach, the quicker you will become.\n\n\nThe websites of Christmas present\n\nThe best way to discover websites using this technique of progressive enhancement is to disable JavaScript and see if the website breaks. I use the Web Developer extension, which is available for Chrome and Firefox. It lets me quickly disable JavaScript.\n\n Web Developer extension.\n\n24 ways works with and without JavaScript. Try using the menu icon to view the navigation. Without JavaScript, it\u2019s a jump link to the bottom of the page, but with JavaScript, the menu slides in from the right.\n\n 24 ways navigation with JavaScript disabled.\n\n 24 ways navigation with working JavaScript.\n\nGoogle search will also work without JavaScript. You won\u2019t get instant search results or any prerendering, because those are enhancements.\n\nFor a more app-like example, try using Twitter. Without JavaScript, it still works, and looks nearly identical. But when you load JavaScript, links open in modal windows and all pages are navigated much quicker, as only the content that has changed is loaded. You can read about how they achieved this in Twitter\u2019s blog posts Improving performance on twitter.com and Implementing pushState for twitter.com.\n\nUnfortunately Facebook doesn\u2019t use progressive enhancement, which not only means that the website doesn\u2019t work without JavaScript, but it takes longer to load. I tested it on WebPagetest and if you compare the load times of Twitter and Facebook, you\u2019ll notice that, despite putting similar content on the page, Facebook takes two and a half times longer to render the core content on the page.\n\n Facebook takes two and a half times longer to load than Twitter.\n\nWebsites of Christmas yet to come\n\nEvery project is different, and making a website that enjoys a long life, or serves a larger number of users may or may not be a high priority. But I hope I\u2019ve convinced you that it certainly is possible to look to the past and future simultaneously, and that there can be significant advantages to doing so.", "year": "2014", "author": "Josh Emerson", "author_slug": "joshemerson", "published": "2014-12-08T00:00:00+00:00", "url": "https://24ways.org/2014/websites-of-christmas-past-present-and-future/", "topic": "code"} {"rowid": 42, "title": "An Overview of SVG Sprite Creation Techniques", "contents": "SVG can be used as an icon system to replace icon fonts. The reasons why SVG makes for a superior icon system are numerous, but we won\u2019t be going over them in this article. 
If you don\u2019t use SVG icons and are interested in knowing why you may want to use them, I recommend you check out \u201cInline SVG vs Icon Fonts\u201d by Chris Coyier \u2013 it covers the most important aspects of both systems and compares them with each other to help you make a better decision about which system to choose.\n\nOnce you\u2019ve made the decision to use SVG instead of icon fonts, you\u2019ll need to think of the best way to optimise the delivery of your icons, and ways to make the creation and use of icons faster.\n\nJust like bitmaps, we can create image sprites with SVG \u2013 they don\u2019t look or work exactly alike, but the basic concept is pretty much the same.\n\nThere are several ways to create SVG sprites, and this article will give you an overview of three of them. While we\u2019re at it, we\u2019re going to take a look at some of the available tools used to automate sprite creation and fallback for us.\n\nPrerequisites\n\nThe content of this article assumes you are familiar with SVG. If you\u2019ve never worked with SVG before, you may want to look at some of the introductory tutorials covering SVG syntax, structure and embedding techniques. I recommend the following:\n\n\n\tSVG basics: Using SVG.\n\tStructure: Structuring, Grouping, and Referencing in SVG \u2014 The <g>, <use>, <defs> and <symbol> Elements. We\u2019ll mention <use> and <symbol> quite a bit in this article.\n\tEmbedding techniques: Styling and Animating SVGs with CSS. The article covers several topics, but the section linked focuses on embedding techniques.\n\tA compendium of SVG resources compiled by Chris Coyier \u2014 contains resources to almost every aspect of SVG you might be interested in.\n\n\nAnd if you\u2019re completely new to the concept of spriting, Chris Coyier\u2019s CSS Sprites explains all about them.\n\nAnother important SVG feature is the viewBox attribute. For some of the techniques, knowing your way around this attribute is not required, but it\u2019s definitely more useful if you understand \u2013 even if just vaguely \u2013 how it works. The last technique mentioned in the article requires that you do know the attribute\u2019s syntax and how to use it. To learn all about viewBox, you can refer to my blog post about SVG coordinate systems.\n\nWith the prerequisites in place, let\u2019s move on to spriting SVGs!\n\nBefore you sprite\u2026\n\nIn order to create an SVG sprite with your icons, you\u2019ll of course need to have these icons ready for use.\n\nSome spriting tools require that you place your icons in a folder to which a certain spriting process is to be applied. As such, for all of the upcoming sections we\u2019ll work on the assumption that our SVG icons are placed in a folder named SVG.\n\nEach icon is an individual .svg file.\n\nYou\u2019ll need to make sure each icon is well-prepared and optimised for use \u2013 make sure you\u2019ve cleaned up the code by running it through one of the optimisation tools or processes available (or doing it manually if it\u2019s not tedious).\n\nAfter prepping the icon files and placing them in a folder, we\u2019re ready to create our SVG sprite.\n\nHTML inline SVG sprites\n\nSince SVG is XML code, it can be embedded inline in an HTML document as a code island using the <svg> element. Chris Coyier wrote about this technique first on CSS-Tricks.\n\nThe embedded SVG will serve as a container for our icons and is going to be the actual sprite we\u2019re going to use. 
So we\u2019ll start by including the SVG in our document.\n\n<!DOCTYPE html>\n<!-- HTML document stuff -->\n\n<svg style=\"display:none;\">\n <!-- icons here -->\n</svg>\n\n<!-- other document stuff -->\n</html>\n\nNext, we\u2019re going to place the icons inside the <svg>. Each icon will be wrapped in a <symbol> element we can then reference and use elsewhere in the page using the SVG <use> element. The <symbol> element has many benefits, and we\u2019re using it because it allows us to define a symbol (which is a convenient markup for an icon) without rendering that symbol on the screen. The elements defined inside <symbol> will only be rendered when they are referenced \u2013 or called \u2013 by the <use> element.\n\nMoreover, <symbol> can have its own viewBox attribute, which makes it possible to control the positioning of its content inside its container at any time.\n\nBefore we move on, I\u2019d like to shed some light on the style=\"display:none;\" part of the snippet above. Without setting the display of the SVG to none, and even though its contents are not rendered on the page, the SVG will still take up space in the page, resulting in a big empty area. In order to avoid that, we\u2019re hiding the SVG entirely with CSS.\n\nNow, suppose we have a Twitter icon in the icons folder. twitter.svg might look something like this:\n\n<!-- twitter.svg -->\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\" \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<svg version=\"1.1\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"32\" height=\"32\" viewBox=\"0 0 32 32\">\n<path d=\"M32 6.076c-1.177 0.522-2.443 0.875-3.771 1.034 1.355-0.813 2.396-2.099 2.887-3.632-1.269 0.752-2.674 1.299-4.169 1.593-1.198-1.276-2.904-2.073-4.792-2.073-3.626 0-6.565 2.939-6.565 6.565 0 0.515 0.058 1.016 0.17 1.496-5.456-0.274-10.294-2.888-13.532-6.86-0.565 0.97-0.889 2.097-0.889 3.301 0 2.278 1.159 4.287 2.921 5.465-1.076-0.034-2.088-0.329-2.974-0.821-0.001 0.027-0.001 0.055-0.001 0.083 0 3.181 2.263 5.834 5.266 6.437-0.551 0.15-1.131 0.23-1.73 0.23-0.423 0-0.834-0.041-1.235-0.118 0.835 2.608 3.26 4.506 6.133 4.559-2.247 1.761-5.078 2.81-8.154 2.81-0.53 0-1.052-0.031-1.566-0.092 2.905 1.863 6.356 2.95 10.064 2.95 12.076 0 18.679-10.004 18.679-18.68 0-0.285-0.006-0.568-0.019-0.849 1.283-0.926 2.396-2.082 3.276-3.398z\" fill=\"#000000\"></path>\n</svg>\n\nWe don\u2019t need the root svg element, so we\u2019ll strip the code and only keep the parts that make up the Twitter icon\u2019s shape, which in this example is just the <path> element. Let\u2019s drop that into the sprite container like so:\n\n<svg style=\"display:none;\">\n <symbol id=\"twitter-icon\" viewBox=\"0 0 32 32\">\n <path d=\"M32 6.076c-1.177 \u2026\" fill=\"#000000\"></path>\n </symbol>\n\n <!-- remaining icons here -->\n <symbol id=\"instagram-icon\" viewBox=\"0 0 32 32\">\n <!-- icon contents -->\n </symbol>\n\n <!-- etc. -->\n</svg>\n\nRepeat for the other icons.\n\nThe value of the <symbol> element\u2019s viewBox attribute depends on the size of the SVG. You don\u2019t need to know how the viewBox works to use it in this case. Its value is made up of four parts: the first two will almost always be \u201c0 0\u201d; the second two will be equal to the size of the icon. 
For example, our Twitter icon is 32px by 32px (see twitter.svg above), so the viewBox value is \u201c0 0 32 32\u201d.\n\nThat said, it is certainly useful to understand how the viewBox works \u2013 it can help you troubleshoot SVG sometimes and gives you better control over it, allowing you to scale, position and even crop SVGs manually without having to resort to an editor. My blog post explains all about the viewBox attribute and its related attributes.\n\nOnce you have your SVG sprite ready, you can display the icons anywhere on the page by referencing them using the SVG <use> element:\n\n<svg class=\"twitter-icon\">\n <use xlink:href=\"#twitter-icon\"></use>\n</svg>\n\nAnd that\u2019s all there is to it!\n\nHTML-inline SVG sprites are simple to create and use, but when you have a lot of icons (and the more icon sets you create) it can easily become daunting if you have to manually transfer the icons into the <svg>. Fortunately, you don\u2019t have to do that. Fabrice Weinberg created a Grunt plugin called grunt-svgstore which takes the icons in your SVG folder and generates the SVG sprites for you; all you have to do is just drop the sprites into your page and use the icons like we did earlier.\n\nThis technique works in all browsers supporting SVG. There seems to be a bug in Safari on iOS which causes the icons not to show up when the SVG sprite is defined at the bottom of the document after the <use> references to the icons, so it\u2019s safest to include the sprite before you use the icons until this bug is fixed.\n\nThis technique has one disadvantage: the SVG sprite cannot be cached. We\u2019re saving an extra HTTP request here but the browser cannot cache the image, so we aren\u2019t speeding up any subsequent page loads by inlining the SVG. There must be a better way \u2013 and there is.\n\nStyling the icons is possible, but getting deep into the styles becomes a bit harder owing to the nature of the contents of the <use> element \u2013 these contents are cloned into a shadow DOM, and hence selecting elements in CSS the traditional way is not possible. However, some techniques to work around that do exist, and give us slightly more styling flexibility. Animations work as expected.\n\nReferencing an external SVG sprite in HTML\n\nInstead of including the SVG inline in the document, you can reference the sprite and the icons inside it externally, taking advantage of fragment identifiers to select individual icons in the sprite.\n\nFor example, the above reference to the Twitter icon would look something like this instead:\n\n<svg class=\"twitter-icon\">\n <use xlink:href=\"path/to/icons.svg#twitter-icon\"></use>\n</svg>\n\n\nicons.svg is the name of the SVG file that contains all of our icons as symbols, and the fragment identifier #twitter-icon is the reference to the <symbol> wrapping the Twitter icon\u2019s contents. Very convenient, isn\u2019t it? The browser will request the sprite and then cache it, speeding up subsequent page loads. Win!\n\nThis technique also works in all browsers supporting SVG except Internet Explorer \u2013 not even IE9+ with SVG support permits this technique. No version of IE supports referencing an external SVG in <use>.\n\nFortunately (again), Jonathan Neal has created a plugin called svg4everybody which fills this gap in IE; you can reference an external sprite in <use> and also provide fallback for browsers that do not support SVG. However, it requires you to have the fallback images (PNG or JPEG, for example) available to do so. 
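In practice, your markup keeps the same external <use> pattern shown above; the script simply patches things up in browsers that need help. A minimal sketch, assuming the script is served locally:\n\n<script src=\"/js/svg4everybody.js\"></script>\n<script>svg4everybody();</script>\n\n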
For details, refer to the plugin\u2019s Github repository\u2019s readme file.\n\nCSS inline SVG sprites\n\nAnother way to create an SVG sprite is by inlining the SVG icons in a style sheet using data URIs, and providing fallback for non-supporting browsers \u2013 also within the CSS.\n\nUsing this approach, we\u2019re turning the style sheet into the sprite that includes our icons. The style sheet is normally cached by the browser, so we have that concern out of the way.\n\nThis technique is put into practice in Filament Group\u2019s icon system approach, which uses their Grunticon plugin \u2013 or its sister Grumpicon web app \u2013 for generating the necessary CSS for the sprite. As such, we\u2019re going to cover this technique by following a workflow that uses one of these tools.\n\nAgain, we start with our icon SVG files. To focus on the actual spriting method and not on the tooling, I\u2019ll go over the process of sprite creation using the Grumpicon web app, instead of the Grunticon plugin. Both tools generate the same resources that we\u2019re going to use for the icon system. Whether you choose the web app or the Grunt set-up, after processing your SVG folder you\u2019re going to end up with the same set of resources that we\u2019ll be using throughout this section.\n\nThe first step is to drop your icons into the Grumpicon web app.\n\n Grumpicon homepage screenshot.\n\nThe application will then show you a preview of your icons, and a download button will allow you to download the generated files. These files will contain everything you need for your icon system \u2013 all that\u2019s left is for you to drop the generated files and code into your project as recommended and you\u2019ll have your sprite and icons ready to use anywhere you want in your page.\n\nGrumpicon generates five files and one folder in the downloaded package: a png folder containing PNG versions of your icons; three style sheets (that we\u2019ll go over briefly); a loader script file; and preview.html which is a live example showing you the other files in action.\n\nThe script in the loader goes into the <head> of your page. This script handles browser and feature detection, and requests the necessary style sheet depending on browser support for SVG and base64 data URIs. If you view the source code of the preview page, you can see exactly how the script is added.\n\nicons.data.svg.css is the style sheet that contains your icons \u2013 the sprite. The icons are embedded inline inside the style sheet using data URIs, and applied to elements of your choice as background images, using class names. For example:\n\n.twitter-icon{\n background-image: url('data:image/svg+xml;\u2026'); /* the ellipsis is where the icon\u2019s data would go */\n background-repeat: no-repeat;\n background-position: 50% 50%;\n height: 2em;\n width: 2em;\n /* etc. */\n}\n\nThen, you only have to apply the twitter-icon class name to an element in your HTML to apply the icon as a background to it:\n\n<span class=\"twitter-icon\"></span>\n\nAnd that\u2019s all you need to do to get an icon on the page.\n\nicons.data.svg.css, along with the other two style sheets and the png folder should be added to your CSS folder.\n\nicons.data.png.css is the style sheet the script will load in browsers that don\u2019t support SVG, such as IE8. Fallback for the inline SVG is provided as a base64-encoded PNG. 
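The loader call itself is tiny. Roughly \u2013 and this is a sketch based on the generated file names above, not the exact code Grumpicon emits \u2013 it looks like this:\n\ngrunticon([\n 'icons.data.svg.css',\n 'icons.data.png.css',\n 'icons.fallback.css'\n]);\n\nIt works through that list and applies the first style sheet the browser can actually handle, falling back from SVG data URIs, to PNG data URIs, to plain PNG files. 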
For instance, the fallback for the Twitter icon from our example would look like so:\n\n.twitter-icon{\n background-image: url('data:image/png;base64,\u2026');\n /* etc. */\n}\n\nicons.fallback.css is the style sheet required for browsers that don\u2019t support base64-encoded PNGs \u2013 the PNG images are loaded as usual using the image\u2019s URL. The script will load this style sheet for IE6 and IE7, for example.\n\n.twitter-icon{\n background-image: url(png/twitter-icon.png);\n /* etc. */\n}\n\nThis technique is very different from the previous one. The sprite in this case is literally the style sheet, not an SVG container, and the icon usage is very similar to that of a CSS sprite \u2013 the icons are provided as background images.\n\nThis technique has advantages and disadvantages. For the sake of brevity, I won\u2019t go into further details, but the main limitations worth mentioning are that SVGs embedded as background images cannot be styled with CSS; and animations are restricted to those defined inside the <svg> for each icon. CSS interactions (such as hover effects) don\u2019t work either. Thus, to apply an effect for an icon that changes its colour on hover, for example, you\u2019ll need to export a set of SVGs for each colour in order for Grumpicon to create matching fallback PNG images that can then be used for the animation.\n\nFor more details about the Grumpicon workflow, I recommend you check out \u201cA Designer\u2019s Guide to Grumpicon\u201d on Filament Group\u2019s website.\n\nUsing SVG fragment identifiers and views\n\nThis spriting technique is, again, different from the previous ones, and it is my personal favourite.\n\nSVG comes with a standard way of cropping to a specific area in a particular SVG image. If you\u2019ve ever worked with CSS sprites before then this definitely sounds familiar: it\u2019s almost exactly what we do with CSS sprites \u2013 the image containing all of the icons is cropped, so to speak, to show only the one icon that we want in the background positioning area of the element, using background size and positioning properties.\n\nInstead of using background properties, we\u2019ll be using SVG\u2019s viewBox attribute to crop our SVG to the specific icon we want.\n\nWhat I like about this technique is that it is more visual than the previous ones. Using this technique, the SVG sprite is treated like an actual image containing other images (the icons), instead of treating it as a piece of code containing other code.\n\nAgain, our SVG icons are placed inside a main SVG container that is going to be our SVG sprite. If you\u2019re working in a graphics editor, position or arrange your icons inside the canvas any way you want them to be, and then export the graphic as is. Of course, the less empty space there is in your SVG, the better.\n\nIn our example, the sprite contains three icons as shown in the following image. The sprite is open in Sketch. Notice how the SVG is just big enough to fit the icons inside it. It doesn\u2019t have to be like this, but it\u2019s cleaner this way.\n\n Screenshot showing the SVG sprite containing our icons.\n\nNow, suppose you want to display only the Instagram icon. Using the SVG viewBox attribute, we can crop the SVG to the icon. The Instagram icon is positioned at 64px along the positive x-axis, and zero pixels along the y-axis. 
It is also 32px by 32px in size.\n\n Screenshot showing the position (offset) of the Instagram icon inside the SVG sprite, and its size.\n\nUsing this information, we can specify the value of the viewBox as: 64 0 32 32. This area of the view box contains only the Instagram icon. 64 0 specifies the top-left corner of the view box area, and 32 32 specifies its dimensions.\n\nNow, if we were to change the viewBox value on the SVG sprite to this value, only the Instagram icon will be visible inside the SVG viewport. Great. But how do we use this information to display the icon in our page using our sprite?\n\nSVG comes with a native way to link to portions or areas of an image using fragment identifiers. Fragment identifiers are used to link into a particular view area of an SVG document. Thus, using a fragment identifier and the boundaries of the area that we want (from the viewBox), we can link to that area and display it.\n\nFor example, if you want to display the icon from the sprite using an <img> tag, you can reference the icon in the sprite like so:\n\n<img src='sprite.svg#svgView(viewBox(64, 0, 32, 32))' alt=\"Instagram icon\"/>\n\nThe fragment identifier in the snippet above (#svgView(viewBox(64, 0, 32, 32))) is the important part. This will result in only the Instagram icon\u2019s area of the sprite being displayed.\n\nThere is also another way to do this, using the SVG <view> element. The <view> element can be used to define a view area and then reference that area somewhere else. For example, to define the view box containing the Instagram icon, we can do the following:\n\n<view id='instagram-icon' viewBox='64 0 32 32' />\n\nThen, we can reference this view in our <img> element like this:\n\n<img src='sprite.svg#instagram-icon' alt=\"Instagram icon\" />\n\nThe best part about this technique \u2013 besides the ability to reference an external SVG and hence make use of browser caching \u2013 is that it allows us to use practically any SVG embedding technique and does not restrict us to specific tags.\n\nIt goes without saying that this feature can be used for more than just icon systems, owing to viewBox\u2019s power in controlling an SVG\u2019s viewable area.\n\nSVG fragment identifiers have decent browser support, but the technique is buggy in Safari: there is a bug that causes problems when an SVG file is loaded from the server and fragment identifiers are then used with it. Bear Travis has documented the issue and a workaround.\n\nWhere to go from here\n\nPick the technique that works best for your project. Each technique has its own pros and cons, relating to convenience and maintainability, performance, and styling and scripting. Each technique also requires its own fallback mechanism.\n\nThe spriting techniques mentioned here are not the only techniques available. Other methods exist, such as SVG stacks, and others may surface in future, but these are the three main ones today.\n\nThe third technique, using SVG\u2019s built-in viewBox features, is my favourite, and with better browser support and fewer (ideally, no) bugs, I believe it is more likely to become the standard way to create and use SVG sprites. Fallback techniques can be created, of course, in one of many possible ways.\n\nDo you use SVG for your icon system? If so, which is your favourite technique? 
Do you know of, or have you worked with, other ways of creating SVG sprites?", "year": "2014", "author": "Sara Soueidan", "author_slug": "sarasoueidan", "published": "2014-12-16T00:00:00+00:00", "url": "https://24ways.org/2014/an-overview-of-svg-sprite-creation-techniques/", "topic": "code"} {"rowid": 46, "title": "Responsive Enhancement", "contents": "24 ways has been going strong for ten years. That\u2019s an aeon in internet timescales. Just think of all the changes we\u2019ve seen in that time: the rise of Ajax, the explosion of mobile devices, the unrecognisably changed landscape of front-end tooling.\n\nTools and technologies come and go, but one thing has remained constant for me over the past decade: progressive enhancement.\n\nProgressive enhancement isn\u2019t a technology. It\u2019s more like a way of thinking. Instead of thinking about the specifics of how a finished website might look, progressive enhancement encourages you to think about the fundamental meaning of what the website is providing. So instead of thinking of a website in terms of its ideal state in a modern browser on a nice widescreen device, progressive enhancement allows you to think about the core functionality in a more abstract way.\n\nOnce you\u2019ve figured out what the core functionality is \u2013 adding an item to a shopping cart, posting a message, sharing a photo \u2013 then you can enable that functionality in the simplest possible way. That usually means starting with good old-fashioned HTML. Links and forms are often all you need. Then, once you have the core functionality working in a basic way, you can start to enhance to make a progressively better experience for more modern browsers.\n\nThe advantage of working this way isn\u2019t just that your site will work in older browsers (albeit in a rudimentary way). It also ensures that if anything goes wrong in a modern browser, it won\u2019t be catastrophic.\n\nThere\u2019s a common misconception that progressive enhancement means that you\u2019ll spend your time dealing with older browsers, but in fact the opposite is true. Putting the basic functionality into place doesn\u2019t take very long at all. And once you\u2019ve done that, you\u2019re free to spend all your time experimenting with the latest and greatest browser technologies, secure in the knowledge that even if they aren\u2019t universally supported yet, that\u2019s OK: you\u2019ve already got your fallback in place.\n\nThe key to thinking about web development this way is realising that there isn\u2019t one final interface \u2013 there could be many, slightly different interfaces depending on the properties and capabilities of any particular user agent at any particular moment. And that\u2019s OK. Websites do not need to look the same in every browser.\n\nOnce you truly accept that, it\u2019s an immensely liberating idea. Instead of spending your time trying to make websites look the same in wildly varying browsers, you can spend your time making sure that the core functionality of what you build works everywhere, while providing the best possible experience for more capable browsers.\n\nAllow me to demonstrate with a simple example: navigation.\n\nStep one: core functionality\n\nLet\u2019s say we have a straightforward website about the twelve days of Christmas, with a page for each day. 
The core functionality is pretty clear:\n\n\n\tTo read about any particular day.\n\tTo browse from day to day.\n\n\nThe first is easily satisfied by marking up the text with headings, paragraphs and all the usual structural HTML elements. The second is satisfied by providing a list of good ol\u2019 hyperlinks.\n\nNow where\u2019s the best place to position this navigation list? Personally, I\u2019m a big fan of the jump-to-footer pattern. This puts the content first and the navigation second. At the top of the page there\u2019s a link with an href attribute pointing to the fragment identifier for the navigation.\n\n<body>\n <main role=\"main\" id=\"top\">\n <a href=\"#menu\" class=\"control\">Menu</a>\n ...\n </main>\n <nav role=\"navigation\" id=\"menu\">\n ...\n <a href=\"#top\" class=\"control\">Dismiss</a>\n </nav>\n</body>\n\nSee the footer-anchor pattern in action.\n\nBecause it\u2019s nothing more than a hyperlink, this works in just about every browser since the dawn of the web. Following hyperlinks is what web browsers were made to do (hence the name).\n\nStep two: layout as an enhancement\n\nThe footer-anchor pattern is a particularly neat solution on small-screen devices, like mobile phones. Once more screen real estate is available, I can use the magic of CSS to reposition the navigation above the content. I could use position: absolute, flexbox or, in this case, display: table.\n\n@media all and (min-width: 35em) {\n .control {\n display: none;\n }\n body {\n display: table;\n }\n [role=\"navigation\"] {\n display: table-caption;\n columns: 6 15em;\n }\n}\n\nSee the styles for wider screens in action\n\nStep three: enhance!\n\nRight. At this point I\u2019m providing core functionality to everyone, and I\u2019ve got nice responsive styles for wider screens. I could stop here, but the real advantage of progressive enhancement is that I don\u2019t have to. From here on, I can go crazy adding all sorts of fancy enhancements for modern browsers, without having to worry about providing a fallback for older browsers \u2013 the fallback is already in place.\n\nWhat I\u2019d really like is to provide a swish off-canvas pattern for small-screen devices. 
Here\u2019s my plan:\n\n\n\tPosition the navigation under the main content.\n\tListen out for the .control links being activated and intercept that action.\n\tWhen those links are activated, toggle a class of .active on the body.\n\tIf the .active class exists, slide the content out to reveal the navigation.\n\n\nHere\u2019s the CSS for positioning the content and navigation:\n\n@media all and (max-width: 35em) {\n [role=\"main\"] {\n transition: all .25s;\n width: 100%;\n position: absolute;\n z-index: 2;\n top: 0;\n right: 0;\n }\n [role=\"navigation\"] {\n width: 75%;\n position: absolute;\n z-index: 1;\n top: 0;\n right: 0;\n }\n .active [role=\"main\"] {\n transform: translateX(-75%);\n }\n}\n\nIn my JavaScript, I\u2019m going to listen out for any clicks on the .control links and toggle the .active class on the body accordingly:\n\n(function (win, doc) {\n 'use strict';\n var linkclass = 'control',\n activeclass = 'active',\n toggleClassName = function (element, toggleClass) {\n var reg = new RegExp('(\\\\s|^)' + toggleClass + '(\\\\s|$)');\n if (!element.className.match(reg)) {\n element.className += ' ' + toggleClass;\n } else {\n element.className = element.className.replace(reg, '');\n }\n },\n navListener = function (ev) {\n ev = ev || win.event;\n var target = ev.target || ev.srcElement;\n if (target.className.indexOf(linkclass) !== -1) {\n ev.preventDefault();\n toggleClassName(doc.body, activeclass);\n }\n };\n doc.addEventListener('click', navListener, false);\n}(this, this.document));\n\nI\u2019m all set, right? Not so fast!\n\nCutting the mustard\n\nI\u2019ve made the assumption that addEventListener will be available in my JavaScript. That isn\u2019t a safe assumption. That\u2019s because JavaScript \u2013 unlike HTML or CSS \u2013 isn\u2019t fault-tolerant. If you use an HTML element or attribute that a browser doesn\u2019t understand, or if you use a CSS selector, property or value that a browser doesn\u2019t understand, it\u2019s no big deal. The browser will just ignore what it doesn\u2019t understand: it won\u2019t throw an error, and it won\u2019t stop parsing the file.\n\nJavaScript is different. If you make an error in your JavaScript, or use a JavaScript method or property that a browser doesn\u2019t recognise, that browser will throw an error, and it will stop parsing the file. That\u2019s why it\u2019s important to test for features before using them in JavaScript. That\u2019s also why it isn\u2019t safe to rely on JavaScript for core functionality.\n\nIn my case, I need to test for the existence of addEventListener:\n\n(function (win, doc) {\n if (!win.addEventListener) {\n return;\n }\n ...\n}(this, this.document));\n\nThe good folk over at the BBC call this kind of feature test cutting the mustard. If a browser passes the test, it cuts the mustard, and so it gets the enhancements. If a browser doesn\u2019t cut the mustard, it doesn\u2019t get the enhancements. And that\u2019s fine because, remember, websites don\u2019t need to look the same in every browser.\n\nI want to make sure that my off-canvas styles are only going to apply to mustard-cutting browsers. 
I\u2019m going to use JavaScript to add a class of .cutsthemustard to the document:\n\n(function (win, doc) {\n if (!win.addEventListener) {\n return;\n }\n ...\n var enhanceclass = 'cutsthemustard';\n doc.documentElement.className += ' ' + enhanceclass;\n}(this, this.document));\n\nNow I can use the existence of that class name to adjust my CSS:\n\n@media all and (max-width: 35em) {\n .cutsthemustard [role=\"main\"] {\n transition: all .25s;\n width: 100%;\n position: absolute;\n z-index: 2;\n top: 0;\n right: 0;\n }\n .cutsthemustard [role=\"navigation\"] {\n width: 75%;\n position: absolute;\n z-index: 1;\n top: 0;\n right: 0;\n }\n .cutsthemustard .active [role=\"main\"] {\n transform: translateX(-75%);\n }\n}\n\nSee the enhanced mustard-cutting off-canvas navigation. Remember, this only applies to small screens so you might have to squish your browser window.\n\nEnhance all the things!\n\nThis was a relatively simple example, but it illustrates the thinking behind progressive enhancement: once you\u2019re providing the core functionality to everyone, you\u2019re free to go crazy with all the latest enhancements for modern browsers.\n\nProgressive enhancement doesn\u2019t mean you have to provide all the same functionality to everyone \u2013 quite the opposite. That\u2019s why it\u2019s key to figure out early on what the core functionality is, and make sure that it can be provided with the most basic technology. But from that point on, you\u2019re free to add many more features that aren\u2019t mission-critical. You should reward more capable browsers by giving them more of those features, such as animation in CSS, geolocation in JavaScript, and new input types in HTML.\n\nLike I said, progressive enhancement isn\u2019t a technology. It\u2019s a way of thinking. Once you start thinking this way, you\u2019ll be prepared for whatever the next ten years throws at us.", "year": "2014", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2014-12-09T00:00:00+00:00", "url": "https://24ways.org/2014/responsive-enhancement/", "topic": "code"}