24ways

Custom SQL query returning 101 rows

rowid · title · contents · year · author · author_slug · published · url · topic
277 Raising the Bar on Mobile One of the primary challenges of designing for mobile devices is that screen real estate is often in limited supply. Through the advocacy of Luke W and others, we’ve drawn comfort from the idea that this constraint ends up benefiting users and designers alike, from obvious advantages like portability and reach, to influencing our content strategy decisions through focus and restraint. But that doesn’t mean we shouldn’t take advantage of every last pixel of that screen we can snag! As anyone who has designed a website for use on a smartphone can attest, there’s an awful lot of space on mobile screens dedicated to browser functions that would be better off toggled out of view. Unfortunately, the visibility of some of these elements is beyond our control, such as the buttons fixed to the bottom of the viewport in iOS’s Safari and the WebOS browser. However, on many devices, the address bar at the top can be manually hidden, and its absence frees up enough pixel room for a large, impactful heading, a critical piece of navigation, or even just a little more white space to air things out. So, as my humble contribution to this most festive of web publications, today I’ll dig into the approach I used to hide the address bar in a browser-agnostic fashion for sites like BostonGlobe.com and the jQuery Mobile framework. Surveying the land First, let’s assess the chromes of some popular, current mobile browsers. For example purposes, the following screen-captures feature the homepage of the Boston Globe site, without any address-bar-hiding logic in place. Note: these captures are just mockups – actual experience on these platforms may vary. On the left is iOS5’s Safari (running on iPhone), and on the right is Windows Phone 7 (pre-Mango). BlackBerry 7 (left), and Android 2.3 (right). WebOS (left), Opera Mini (middle), and Opera Mobile (right). Some browsers, such as the default browsers on WebOS and BlackBerry 5, hide the bar automatically without any developer intervention, but many of them don’t. Of these, we can only manually hide the address bar on iOS Safari and Android (according to Opera Web Opener, Mike Taylor, some discussion is underway for support in Opera Mini and Mobile as well, which would be great!). This is unfortunate, but iOS and Android are incredibly popular, so let’s direct our focus there. Great API, or greatest API? As it turns out, iOS and Android not only allow you to hide the address bar, they use the same JavaScript method to do so, too (this shouldn’t be surprising, given that they are both WebKit browsers, but nothing expected happens in mobile). However, the method they use is not exactly intuitive. You might set out looking for a JavaScript API dedicated to this purpose, like, say, window.toolbar.hide(), but alas, to hide the address bar you need to use the window.scrollTo method! window.scrollTo(0, 0); The scrollTo method is not new; it’s just this particular use of it that is. For the uninitiated, scrollTo is designed to scroll a document to a particular set of coordinates, assuming the document is large enough to scroll to that spot. The method accepts two arguments: a left coordinate and a top coordinate. It’s both simple and well supported pretty much everywhere. 
In iOS and Android, these coordinates are calculated from the top of the browser’s viewport, just below the address bar (interestingly, it seems that some platforms like BlackBerry 6 treat the top of the browser chrome as 0 instead, meaning the page content is closer to 20px from the top). Anyway, by passing the coordinates 0, 0 to the scrollTo method, the browser will jump to the top of the page and pull the address bar out of view! Of course, if a quick call to scrollTo were all we needed to do to hide the address bar in iOS and Android, this article would be pretty short, and nothing new. Unfortunately, the first issue we need to deal with is that this method alone will not usually do the trick: it must be called after the page has finished loading. The browser gives us a load event for just that purpose, so we’ll wrap our scrollTo method in it and continue on our merry way! We’ll use the standard addEventListener method to bind the load event, passing the event name load and a callback function to execute when the event is triggered. window.addEventListener("load",function() { window.scrollTo(0, 0); }); For the sake of preventing errors in those using browsers that don’t support addEventListener, such as Internet Explorer 8 and under, let’s make sure that method exists before we use it: if( window.addEventListener ){ window.addEventListener("load",function() { window.scrollTo(0, 0); }); } Now we’re getting somewhere, but we must also call the method after the load event’s default behaviour has been applied. For this, we can use the setTimeout method, delaying its execution until after the load event has run its course. if( window.addEventListener ){ window.addEventListener("load",function() { setTimeout(function(){ window.scrollTo(0, 0); }, 0); }); } Sweet sugar of Christmas! Hit this demo in iOS and watch that address bar drift up and away! Not so fast… We’ve got a little problem: the approach above does work in iOS but, in some cases, it works a little too well. In the process of applying this behaviour, we’ve broken one of the primary tenets of responsible web development: don’t break the browser’s default behaviour. This usability rule of thumb is often violated by developers with even the best of intentions, from breaking the browser’s back button through unrecorded Ajax page refreshes, to fancy momentum touch scrolling scripts that can wreak havoc in all but the most sophisticated of devices. In this case, we’ve prevented the browser’s native support of deep-linking to sections of a page (a hash identifier in the URL matching a page element’s id attribute, for example, http://example.com#contact) from working properly, because our script always scrolls to the top. To avoid this collision, we’ll need to detect whether a deep link, or hash, is present in the URL before applying our logic. We can do this by ensuring that the location.hash property is falsey: if( !window.location.hash && window.addEventListener ){ window.addEventListener( "load",function() { setTimeout(function(){ window.scrollTo(0, 0); }, 0); }); } Still works great! And a quick test using a hash-based URL confirms that our script will not execute when a deep anchor is in play. Now iOS is looking sharp, and we’ve added our feature defensively to avoid conflicts. Now, on to Android… Wait. You didn’t expect that we could write code for one browser and be finished, right? Of course you didn’t. 
I mentioned earlier that Android uses the same method for getting rid of the address bar, but I left out the fact that the arguments it prefers vary slightly, but significantly, from iOS. Bah! Differing from the earlier logic for iOS, to remove the address bar on Android’s default browser, you need to pass a Y coordinate of 1 instead of 0. Aside from being just plain odd, this is particularly unfortunate because to any other browser on the planet, 1px is a very real, however small, distance from the top of the page! window.scrollTo( 0, 1 ); Looks like we’re going to need a fork… R UA Android? At this point, some developers might decide to simply not support this feature in Android, and more determined devs might decide that a quick check of the User Agent string would be a reliable way to determine the browser and tweak the scroll value accordingly. Neither of those decisions would be tragic, but in the spirit of cross-browser and future-friendly development, I’ll propose an alternative. By this point, it should be clear that neither of the implementations above offers a particularly intuitive way to hide an address bar. As such, one might be skeptical that these approaches will stick around very long in their present state in either browser. Perhaps at some point, Android will decide to use 0 like iOS, making our lives a little easier, or maybe some new browser will decide to model their address bar hiding method after one of these implementations. In any case, detecting the User Agent only allows us to apply logic based on the known present, and in the world of mobile, let’s face it, the present is already the past. Writing a check In this next step of today’s technique, we’ll apply some logic to quickly determine the behaviour model of the browser we’re using, then capitalize on that model – without caring which browser it happens to come from – by applying the appropriate scroll distance. To do this, we’ll rely on a fortunate side effect of Android’s implementation, which is that when you programmatically scroll the page to 1 using scrollTo, Android will report that it’s still at 0 because, oddly enough, it is! Of course, any other browser in this situation will report a scroll distance of 1. Thus, by scrolling the page to 1, then asking the browser its scroll distance, we can use this artifact of their wacky implementation to our advantage and scroll to the location that makes sense for the browser in play. Getting the scroll distance To pull off our test, we’ll need to ask the browser for its current scroll distance. The methods for getting scroll distance are not entirely standardized across popular browsers, so we’ll need to use some cross-browser logic. The following scroll distance function is similar to what you’d find in a library like jQuery. It checks the few common ways of getting scroll distance before eventually falling back to 0 for safety’s sake (that said, I’m unaware of any browsers that won’t return a numeric value from one of the first three properties). // scrollTop getter function getScrollTop(){ return window.pageYOffset || document.compatMode === "CSS1Compat" && document.documentElement.scrollTop || document.body.scrollTop || 0; } In order to execute the code above, the body object (referenced here as document.body) will need to be defined already, or we’ll risk an error. To determine that it’s defined, we can run a quick timer to execute code as soon as that object is defined and ready for use. 
var bodycheck = setInterval(function(){ if( document.body ){ clearInterval( bodycheck ); //more logic can go here!! } }, 15 ); Above, we’ve defined a 15-millisecond interval called bodycheck that checks if document.body is defined and, if so, clears itself of running again. Within that if statement, we can extend our logic further to run other code, such as our check for the scroll distance, defined via the variable scrollTop below: var scrollTop, bodycheck = setInterval(function(){ if( document.body ){ clearInterval( bodycheck ); scrollTop = getScrollTop(); } }, 15 ); With this working, we can immediately scroll to 1, then check the scroll distance when the body is defined. If the distance reports 1, we’re likely in a non-Android browser, so we’ll scroll back to 0 and clean up our mess. window.scrollTo( 0, 1 ); var scrollTop, bodycheck = setInterval(function(){ if( document.body ){ clearInterval( bodycheck ); scrollTop = getScrollTop(); window.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); } }, 15 ); Cashing in All of the pieces are written now, so all we need to do is combine them with our previous logic for scrolling when the window is loaded, and we’ll have a cross-browser solution of which John Resig would be proud. Here’s our combined code snippet, with some formatting updates rolled in as well: (function( win ){ var doc = win.document; // If there’s a hash, or addEventListener is undefined, stop here if( !location.hash && win.addEventListener ){ //scroll to 1 win.scrollTo( 0, 1 ); var scrollTop = 1, getScrollTop = function(){ return win.pageYOffset || doc.compatMode === "CSS1Compat" && doc.documentElement.scrollTop || doc.body.scrollTop || 0; }, //reset to 0 on bodyready, if needed bodycheck = setInterval(function(){ if( doc.body ){ clearInterval( bodycheck ); scrollTop = getScrollTop(); win.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); } }, 15 ); win.addEventListener( "load", function(){ setTimeout(function(){ //reset to hide addr bar at onload win.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); }, 0); } ); } })( this ); View code example And with that, we’ve got a bunch more room to play with on both iOS and Android. Break out the eggnog …because we’re not done yet! In the spirit of making our script act more defensively, there’s still another use case to consider. It was essential that we used the window’s load event to trigger our scripting, but on pages with a lot of content, its use can come at a cost. Often, a user will begin interacting with a page, scrolling down as they read, before the load event has fired. In those situations, our script will jump the user back to the top of the page, resulting in a jarring experience. To prevent this problem from occurring, we’ll need to ensure that the page has not been scrolled beyond a certain amount. We can add a simple check using our getScrollTop function again, this time ensuring that its value is not greater than 20 pixels or so, accounting for a small tolerance. if( getScrollTop() < 20 ){ //reset to hide addr bar at onload window.scrollTo( 0, scrollTop === 1 ? 0 : 1 ); } And with that, we’re pretty well protected! Here’s a final demo. The completed script can be found on GitHub (full source: https://gist.github.com/1183357 ). It’s MIT licensed. Feel free to use it anywhere or any way you’d like! Your thoughts? I hope this article provides you with a browser-agnostic approach to hiding the address bar that you can use in your own projects today. 
Perhaps, alternatively, the complications involved in this approach have convinced you that doing this well is more trouble than it’s worth and, depending on the use case, that could be a fair decision. But at the very least, I hope this demonstrates that there’s a lot of work involved in pulling off this small task on only two major platforms, and that there’s a real need for standardization in this area. Feel free to leave a comment or criticism and I’ll do my best to answer in a timely fashion. Thanks, everyone! Some parting notes I scream, you scream… At the time of writing, I was not able to test this method on the latest Android 4.0 (Ice Cream Sandwich) build. According to Sencha Touch’s browser scorecard, the browser in 4.0 may have a different way of managing the address bar, so I’ll post in the comments once I get a chance to dig into it further. Short pages get no love Today’s technique only works when the page is as tall as, or taller than, the device’s available screen height, so that the address bar may be scrolled out of view. On a short page, you might work around this issue by applying a minimum height to the body element ( body { min-height: 460px; } ), but given the variety of screen sizes out there, not to mention changes in orientation, it’s tough to find a value that makes much sense (unless you manipulate it with JavaScript, as sketched below). 2011 Scott Jehl scottjehl 2011-12-20T00:00:00+00:00 https://24ways.org/2011/raising-the-bar-on-mobile/ design
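Following up on that last parting note, here’s a rough sketch of the JavaScript route – a minimal illustration rather than part of the completed script above, and the 60px address-bar allowance is an assumed figure, not a measured one:

// Hypothetical sketch: make short pages tall enough that the address
// bar can actually be scrolled out of view. The 60px allowance is an
// assumption; real address bar heights vary by device and orientation.
function ensureScrollableHeight(){
	var barAllowance = 60;
	var target = ( window.innerHeight || screen.height ) + barAllowance;
	if( document.body.offsetHeight < target ){
		document.body.style.minHeight = target + "px";
	}
}
window.addEventListener( "load", ensureScrollableHeight );

Re-running a check like this on orientation change would be a natural extension, for the same reason a single hard-coded min-height value falls short.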
317 Putting the World into "World Wide Web" Despite the fact that the Web has been international in scope from its inception, the predominant mass of Web sites is written in English or another left-to-right language. Sites are typically designed visually for Western culture, and rely on an enormous body of practices for usability, information architecture and interaction design that are by and large centric to the Western world. There are certainly many reasons this is true, but as more and more Web sites realize the benefits of bringing their products and services to diverse, global markets, the more demand there will be on Web designers and developers to understand how to put the World into World Wide Web. Internationalization According to the W3C, Internationalization is: “…the design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language.” Many Web designers and developers have at least heard, if not read, about Internationalization. We understand that the Web is in fact worldwide, but many of us never have the opportunity to work with Internationalization. Or, when we do, we think of it in purely technical terms, such as “which character set do I use?” At first glance, it might seem to many that Internationalization is the act of making Web sites available to international audiences. And while that is in fact true, this isn’t done by broad-stroking techniques and technologies. Instead, it involves a far more narrow understanding of geographical, cultural and linguistic differences in specific areas of the world. This is referred to as localization and is the act of making a Web site make sense in the context of the region, culture and language(s) the people using the site are most familiar with. Internationalization itself includes the following technical tasks: Ensuring no barrier exists to the localization of sites. Of critical importance in the planning stages of a site for Internationalized audiences, the role of the developer is to ensure that no barrier exists. This means being able to perform such tasks as enabling Unicode and making sure legacy character encodings are properly handled. Preparing markup and CSS with Internationalization in mind. The earlier in the site development process this occurs, the better. Issues include ensuring that you can support bidirectional text, identifying language, and using CSS to support non-Latin typographic features. Enabling code to support local, regional, language or culturally related references. Examples in this category would include time/date formats, localization of calendars, numbering systems, sorting of lists and managing international forms of addresses. Empowering the user. Sites must be architected so the user can easily choose or implement the localized alternative most appropriate to them. Localization According to the W3C, Localization is the: …adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a “locale”). So here’s where we get down to thinking about the more sociological and anthropological concerns. Some of the primary localization issues are: Numeric formats. Different languages and cultures use numbering systems unlike ours. So, any time we need to use numbers, such as in an ordered list, we have to have a means of representing the accurate numbering system for the locale in question. Money, honey! That’s right. 
I’ve got a pocketful of ugly U.S. dollars (why is U.S. money so unimaginative?). But I also have a drawer full of Japanese Yen, Australian Dollars, and Great British Pounds. Currency, how it’s calculated and how it’s represented, is always a consideration when dealing with localization. Using symbols, icons and colors properly. Using certain symbols or icons on sites where they might offend or confuse is certainly not in the best interest of a site that wants to sell or promote a product, service or information type. Moreover, the colors we use are surprisingly persuasive – or detrimental. Think about colors that represent death, for example. In many parts of Asia, white is the color of death. In most of the Western world, black represents death. For Catholic Europe, shades of purple (especially lavender) have represented Christ on the cross and mourning since at least Victorian times. When Walt Disney World Europe launched an ad campaign using a lot of purple and very glitzy imagery, millions of dollars were lost as a result of this seemingly subtle issue. Instead of experiencing joy and celebration at the ads, the European audience, particularly the French, found the marketing to be overly American, aggressive, depressing and basically unappealing. Along with this and other cultural blunders, Disney Europe has become a well-known case study for businesses wishing to become international. By failing to understand localization differences, and how powerfully color and imagery act on the human psyche, designers and developers are put at more of a disadvantage when attempting to communicate with a given culture. Choosing appropriate references to objects and ideas. What seems perfectly natural in one culture in terms of visual objects and ideas can get confused in another environment. One of my favorite cases of this has to do with Gerber baby food. In the U.S., the baby food is marketed using a cute baby on the package. Most people in the U.S. culturally do not make an immediate association that what is being represented on the label is what is inside the container. However, when Gerber expanded to Africa, where many people don’t read, and where visual associations are less abstract, people made the inference that a baby on the cover of a jar of food represented what is in fact in the jar. You can imagine how confused and even angry people became. Using such approaches as a marketing ploy in the wrong locale can and will render the marketing a failure. As you can see, the act of localization is one that can have profound impact on the success of a business or organization as it seeks to become available to more and more people across the globe. Rethinking Design in the Context of Culture While well-educated designers and those individuals working specifically for companies that do a lot of localization understand these nuances, most of us don’t get exposed to these ideas. Yet, we begin to see how necessary it becomes to have an awareness of not just the technical aspects of Internationalization, but the socio-cultural ones within localization. What’s more, the bulk of information we have when it comes to designing sites typically comes from studies and work done on sites built in English and promoted to Western culture at large. We’re making a critical mistake by not including diverse languages and cultural issues within our usability and information architecture studies. Consider the following design from the BBC: In this case, we’re dealing with English, which is read left to right. 
We are also dealing with U.K. cultural norms. Notice the following: the location of navigation; the use of the color red; the use of diverse symbols; the mix of symbols, icons and photos; the location of search. Now look at this design, which is the Arabic version of the BBC News, read right to left, and dealing with cultural norms within the Arabic-speaking world. Notice the following: the location of navigation (it switches to the right); the use of the color blue (blue is considered the “safest” global color); no use of symbols and icons whatsoever; the limitation of imagery to photos (in most cases, the photos show people, not objects); the location of search. Admittedly, some choices here are more obvious than others in terms of why they were made. But one thing that stands out is that the placement of search is the same for both versions. Is this the result of a specific localization decision, or based on what we believe about usability at large? This is exactly the kind of question that designers working on localization have to seek answers to, instead of relying on popular best practices and belief systems that exist for English-only Web sites. It’s a Wide World Web After All From this brief article on Internationalization, it becomes apparent that the art and science of creating sites for global audiences requires a lot more preparation and planning than one might think at first glance. Developers and designers not working to address these issues specifically, due to time or awareness, will do well to at least understand the basic process of making sites more culturally savvy, and better prepared for any future global expansion. One thing is certain: not only are we on a dramatic learning curve for designing and developing Web sites as it is, but the need to localize sites is going to become more and more a part of the day-to-day work. Understanding aspects of what makes a site international and local will not only help you expand your skill set and make you more marketable, but it will also expand your understanding of the world and the people within it, how they relate to and use the Web, and how you can help make their experience the best one possible. 2005 Molly Holzschlag mollyholzschlag 2005-12-09T00:00:00+00:00 https://24ways.org/2005/putting-the-world-into-world-wide-web/ ux
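As a present-day footnote to the numeric and currency formats discussed above: JavaScript’s standard Intl API (which postdates this 2005 article, so treat this as an illustration rather than the author’s approach) handles much of this locale-aware formatting. A minimal sketch:

// A minimal sketch of locale-aware formatting via the standard Intl API.
var price = 1234567.89;
new Intl.NumberFormat( "en-US", { style: "currency", currency: "USD" } ).format( price );
// "$1,234,567.89"
new Intl.NumberFormat( "de-DE", { style: "currency", currency: "EUR" } ).format( price );
// "1.234.567,89 €"
new Intl.NumberFormat( "ar-EG" ).format( price );
// Arabic-Indic digits: "١٬٢٣٤٬٥٦٧٫٨٩"

The same API family (Intl.DateTimeFormat, Intl.Collator) covers the date formats, calendars and list sorting named in the internationalization task list above.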
54 Putting My Patterns through Their Paces Over the last few years, the conversation around responsive design has shifted subtly, focusing not on designing pages, but on patterns: understanding the small, reusable elements that comprise a larger design system. And given that many of those patterns are themselves responsive, learning to manage these small layout systems has become a big part of my work. The thing is, the more pattern-driven work I do, the more I realize my design process has changed in a number of subtle, important ways. I suppose you might even say that pattern-driven design has, in a few ways, redesigned me. Meet the Teaser Here’s a recent example. A few months ago, some friends and I redesigned The Toast. (It was a really, really fun project, and we learned a lot.) Each page of the site is, as you might guess, stitched together from a host of tiny, reusable patterns. Some of them, like the search form and footer, are fairly unique, and used once per page; others are used more liberally, and built for reuse. The most prevalent example of these more generic patterns is the teaser, which is classed as, uh, .teaser. (Look, I never said I was especially clever.) In its simplest form, a teaser contains a headline, which links to an article: Fairly straightforward, sure. But it’s just the foundation: from there, teasers can have a byline, a description, a thumbnail, and a comment count. In other words, we have a basic building block (.teaser) that contains a few discrete content types – some required, some not. In fact, very few of those pieces need to be present; to qualify as a teaser, all we really need is a link and a headline. But by adding more elements, we can build slight variations of our teaser, and make it much, much more versatile. Nearly every element visible on this page is built out of our generic “teaser” pattern. But the teaser variation I’d like to call out is the one that appears on The Toast’s homepage, on search results or on section fronts. In the main content area, each teaser in the list features larger images, as well as an interesting visual treatment: the byline and comment count were the most prominent elements within each teaser, appearing above the headline. The approved visual design of our teaser, as it appears on lists on the homepage and the section fronts. And this is, as it happens, the teaser variation that gave me pause. Back in the old days – you know, like six months ago – I probably would’ve marked this module up to match the design. In other words, I would’ve looked at the module’s visual hierarchy (metadata up top, headline and content below) and written the following HTML: <div class="teaser"> <p class="article-byline">By <a href="#">Author Name</a></p> <a class="comment-count" href="#">126 <i>comments</i></a> <h1 class="article-title"><a href="#">Article Title</a></h1> <p class="teaser-excerpt">Lorem ipsum dolor sit amet, consectetur…</p> </div> But then I caught myself, and realized this wasn’t the best approach. Moving Beyond Layout Since I’ve started working responsively, there’s a question I work into every step of my design process. Whether I’m working in Sketch, CSSing a thing, or researching a project, I try to constantly ask myself: What if someone doesn’t browse the web like I do? …Okay, that doesn’t seem especially fancy. (And maybe you came here for fancy.) But as straightforward as that question might seem, it’s been invaluable to so many aspects of my practice. 
If I’m working on a widescreen layout, that question helps me remember the constraints of the small screen; if I’m working on an interface that has some enhancements for touch, it helps me consider other input modes as I work. It’s also helpful as a reminder that many might not see the screen the same way I do, and that accessibility (in all its forms) should be a throughline for our work on the web. And that last point, thankfully, was what caught me here. While having the byline and comment count at the top was a lovely visual treatment, it made for a terrible content hierarchy. For example, it’d be a little weird if the page was being read aloud in a speaking browser: the name of the author and the number of comments would be read aloud before the title of the article with which they’re associated. That’s why I find it’s helpful to begin designing a pattern’s hierarchy before its layout: to move past the visual presentation in front of me, and focus on the underlying content I’m trying to support. In other words, if someone’s encountering my design without the CSS I’ve written, what should their experience be? So I took a step back, and came up with a different approach: <div class="teaser"> <h1 class="article-title"><a href="#">Article Title</a></h1> <h2 class="article-byline">By <a href="#">Author Name</a></h2> <p class="teaser-excerpt"> Lorem ipsum dolor sit amet, consectetur… <a class="comment-count" href="#">126 <i>comments</i></a> </p> </div> Much, much better. This felt like a better match for the content I was designing: the headline – easily the most important element – was at the top, followed by the author’s name and an excerpt. And while the comment count is visually the most prominent element in the teaser, I decided it was hierarchically the least critical: that’s why it’s at the very end of the excerpt, the last element within our teaser. And with some light styling, we’ve got a respectable-looking hierarchy in place: Yeah, you’re right – it’s not our final design. But from this basic-looking foundation, we can layer on a bit more complexity. First, we’ll bolster the markup with an extra element around our title and byline: <div class="teaser"> <div class="teaser-hed"> <h1 class="article-title"><a href="#">Article Title</a></h1> <h2 class="article-byline">By <a href="#">Author Name</a></h2> </div> … </div> With that in place, we can use flexbox to tweak our layout, like so: .teaser-hed { display: flex; flex-direction: column-reverse; } flex-direction: column-reverse acts a bit like a change in gravity within our teaser-hed element, vertically swapping its two children. Getting closer! But as great as flexbox is, it doesn’t do anything for elements outside our container, like our little comment count, which is, as you’ve probably noticed, still stranded at the very bottom of our teaser. Flexbox is, as you might already know, wonderful! And while it enjoys incredibly broad support, there are enough implementations of old versions of Flexbox (in addition to plenty of bugs) that I tend to use a feature test to check if the browser’s using a sufficiently modern version of flexbox. Here’s the one we used: var doc = document.body || document.documentElement; var style = doc.style; if ( style.webkitFlexWrap == '' || style.msFlexWrap == '' || style.flexWrap == '' ) { doc.className += " supports-flex"; } Eagle-eyed readers will note we could have used @supports feature queries to ask browsers if they support certain CSS properties, removing the JavaScript dependency. 
But since we wanted to serve the layout to IE, we opted to write a little question in JavaScript, asking the browser if it supports flex-wrap, a property used elsewhere in the design. If the browser passes the test, then a class of supports-flex gets applied to our html element. And with that class in place, we can safely quarantine our flexbox-enabled layout from less-capable browsers, and finish our teaser’s design: .supports-flex .teaser-hed { display: flex; flex-direction: column-reverse; } .supports-flex .teaser .comment-count { position: absolute; right: 0; top: 1.1em; } If the supports-flex class is present, we can apply our flexbox layout to the title area, sure – but we can also safely use absolute positioning to pull our comment count out of its default position, and anchor it to the top right of our teaser. In other words, the browsers that don’t meet our threshold for our advanced styles are left with an attractive design that matches our HTML’s content hierarchy; but the ones that pass our test receive the finished, final design. And with that, our teaser’s complete. Diving Into Device-Agnostic Design This is, admittedly, a pretty modest application of flexbox. (For some truly next-level work, I’d recommend Heydon Pickering’s “Flexbox Grid Finesse”, or anything Zoe Mickley Gillenwater publishes.) And for such a simple module, you might feel like this is, well, quite a bit of work. And you’d be right! In fact, it’s not one layout, but two: a lightly styled content hierarchy served to everyone, with the finished design served conditionally to the browsers that can successfully implement it. But I’ve found that thinking about my design as existing in broad experience tiers – in layers – is one of the best ways of designing for the modern web. And what’s more, it works not just for simple modules like our teaser, but for more complex or interactive patterns as well. Even a simple search form can be conditionally enhanced, given a little layered thinking. This more layered approach to interface design isn’t a new one, mind you: it’s been championed by everyone from Filament Group to the BBC. And with all the challenges we keep uncovering, a more device-agnostic approach is one of the best ways I’ve found to practice responsive design. As Trent Walton once wrote, Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability. We have a weird job, working on the web. We’re designing for the latest mobile devices, sure, but we’re increasingly aware that our definition of “smartphone” is much too narrow. Browsers have started appearing on our wrists and in our cars’ dashboards, but much of the world’s mobile data flows over sub-3G networks. After all, the web’s evolution has never been charted along a straight line: it’s simultaneously getting slower and faster, with devices new and old coming online every day. With all the challenges in front of us, including many we don’t yet know about, a more device-agnostic, more layered design process can better prepare our patterns – and ourselves – for the future. (It won’t help you get enough to eat at holiday parties, though.) 2015 Ethan Marcotte ethanmarcotte 2015-12-10T00:00:00+00:00 https://24ways.org/2015/putting-my-patterns-through-their-paces/ code
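As a side note on those @supports feature queries mentioned above, here’s a minimal CSS-only sketch of the same conditional enhancement – an illustration of the feature-query alternative, not the code The Toast shipped (the article chose the JavaScript test precisely so the enhanced layout could reach IE):

/* A sketch of the @supports alternative. Class names are taken from the
   article's markup; the position: relative on .teaser is an assumption,
   needed to anchor the absolutely positioned comment count. */
.teaser { position: relative; }

@supports ( flex-wrap: wrap ) {
	.teaser-hed {
		display: flex;
		flex-direction: column-reverse;
	}
	.teaser .comment-count {
		position: absolute;
		right: 0;
		top: 1.1em;
	}
}

The layering is identical to the supports-flex approach: every browser gets the content-hierarchy styles, and only those that answer the flex-wrap question get the finished design.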
27 Putting Design on the Map The web can leave us feeling quite detached from the real world. Every site we make is really just a set of abstract concepts manifested as tools for communication and expression. At any minute, websites can disappear, overwritten by a newfangled version or simply gone. I think this is why so many of us have desires to create a product, write a book, or play with the internet of things. We need to keep in touch with the physical world and to prove (if only to ourselves) that we do make real things. I could go on and on about preserving the web, the challenges of writing a book, or thoughts about how we can deal with the need to make real things. Instead, I’m going to explore something that gives us a direct relationship between a website and the physical world – maps. A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected. Reif Larsen, The Selected Works of T.S. Spivet The simplest form of map on a website tends to be used for showing where a place is and often directions on how to get to it. That’s an incredibly powerful tool. So why is it, then, that so many sites just plonk in a default Google Map and leave it at that? You wouldn’t just use dark grey Helvetica on every site, would you? Where’s the personality? Where’s the tailored experience? Where is the design? Jumping into design Let’s keep this simple – we all want to be better web folk, not cartographers. We don’t need to go into the history, mathematics or technology of map making (although all of those areas are really interesting to research). For the sake of our sanity, I’m going to gloss over some of the technical areas and focus on the practical concepts. Tiles If you’ve ever noticed a map loading in sections, it’s because it uses tiles that are downloaded individually instead of requiring the user to download everything that they might need. These tiles come in many styles and can be used for anything that covers large areas, such as base maps and data. You’ve seen examples of alternative base maps when you use Google Maps, as Google provides both satellite imagery and road maps, both of which are forms of base maps. They are used to provide context for the real world, or any other world for that matter. A marker on a blank page is useless. The tiles are representations of the physical; they do not have to be photographic imagery to provide context. This means you can design the map itself. The easiest way to conceive this is by comparing Google’s road maps with Ordnance Survey road maps. Everything about the two maps is different: the colours, the label fonts and the symbols used. Yet they still provide the exact same context (other maps may provide different context such as terrain contours). Comparison of Google Maps (top) and the Ordnance Survey (bottom). Carefully designing the base map tiles is as important as any other part of the website. The most obvious, yet often overlooked, aspects are aesthetics and branding. Maps could fit in with the rest of the site; for example, by matching the colours and line weights, they can enhance the full design rather than inhibiting it. You’re also able to define the exact purpose of the map, so instead of showing everything you could specify which symbols or labels to show and hide. 
I’ve not done any real research on the accessibility of base maps but, having looked at some of the available options, I think a focus on the typography of labels and the colour of the various elements is crucial. While you can choose to hide labels, quite often they provide the data required to make sense of the map. Therefore, make sure each zoom level is not too cluttered and shows enough to give context. Also be as careful when choosing the typeface as you are in any other design work. As for colour, you need to pay closer attention to issues like colour-blindness when using colour to convey information. Quite often a spectrum of colour will be used to show data, or to show the topography, so you need to be aware that some people struggle to see colour differences within a spectrum. A nice example of a customised base map can be found on Michael K Owens’ check-in pages: One of Michael K Owens’ check-in pages. As I’ve already mentioned, tiles are not just for base maps: they are also for data. In the screenshot below you can see how Plymouth Marine Laboratory uses tiles to show data with a spectrum of colour. A map from the Marine Operational Ecology data portal, showing data of adult cod in the North Sea. Technical You’re probably wondering how to design the base layers. I will briefly explain the concepts here and give you tools to use at the end of the article. If you’re worried about the time it takes to design the maps, don’t be – you can automate most of it. You don’t need to manually draw each tile for the entire world! We’ve learned the importance of web standards the hard way, so you’ll be glad to hear (and I won’t have to explain the advantages) that there’s a standard for web mapping from the Open Geospatial Consortium (OGC) called the Web Map Service (WMS). You can use conventional file formats for the imagery but you need a way to query for the particular tiles to show for the area and zoom level; that is what WMS does. Features Tiles are great for covering large areas but sometimes you need specific smaller areas. We call these features and they usually consist of polygons, lines or points. Examples include postcode boundaries and routes between places, or even something more dynamic such as borders of nations changing over time. Showing features on a map presents interesting design challenges. If the colour or shape conveys some kind of data beyond geographical boundaries then it needs to be made obvious. This is actually really hard without building complicated user interfaces. For example, in the image below, is it obvious that there is a relationship between the colours? Does it need a way of showing what the colours represent? Choropleth map showing ranked postcode areas, using ViziCities. Features are represented by means of lines or colors; and the effective use of lines or colors requires more than knowledge of the subject – it requires artistic judgement. Erwin Josephus Raisz, cartographer (1893–1968) Where lots of boundaries are small and close together (such as on a high street or in a shopping centre), will it be obvious where the boundaries are and what they represent? When designing maps, the hardest challenge is dealing with how the data is represented and how it is understood by the user. Technical As you probably gathered, we use WMS for tiles and another standard called the Web Feature Service (WFS) for specific features. I need to stress that the difference between the two is that WMS is for tiling, whereas WFS is for specific features. 
Both can use similar file formats but should be used for their particular use cases. You may be wondering why you can’t just use a vector format such as KML, GeoJSON (or even SVG) – and you can – but the issue is the same as for WMS: you need a way to query the data to get the correct area and zoom level. User interface There is of course never a correct way to design an interface as there are so many different factors to take into consideration for each individual project. Maps can be used in a variety of ways, to provide simple information about directions or for complex visualisations to explain large amounts of data. I would like to just touch on matters that need to be taken into account when working with maps. As I mentioned at the beginning, there are so many Google Maps on the web that people seem to think that its UI is the only way you can use a map. To some degree we don’t want to change that, as people know how to use them; but does every map require a zoom slider or base map toggle? In fact, does the user need to zoom at all? The answer to that one is generally yes: zooming does provide more context to where the map is zoomed in on. In some cases you will need to let users choose what goes on the map (such as data layers or directions), so how do they show and hide the data? Does a simple drop-down box work, or do you need search? Google’s base map toggle is quite nice since it doesn’t offer many options yet provides very different contexts and styling. It isn’t until we get to this point that we realise just plonking in a quick Google map is really quite ridiculous, especially when compared to the amount of effort we make in other areas such as colour, typography or how the CSS is written. Each of these is important but we need to make sure the whole site is designed, and that includes the maps as much as any other content. Putting it into practice I could ramble on for ages about what we can do to customise maps to fit a site’s personality and correctly represent the data. I wanted to focus on concepts and standards because tools constantly change and it is never good to just rely on a tool to do the work. That said, there are a large variety of tools that will help you turn these concepts into reality. This is not a comparison; I just want to show you a few of the many options you have for maps on the web. Google OK, I’ve been quite critical so far about Google Maps but that is only because there are so many default maps across the web. You can style them almost as much as anything else. Google Maps may not allow you to use custom WMS layers, but it does have its own version, called styled maps. Using an array of map features (in the sense of roads and lakes and landmarks rather than the kind WFS is used for), you can style the base map with JavaScript. It even lets you toggle visibility, which helps to avoid the issue of too much clutter on the map. As well as lacking WMS, it doesn’t support WFS, but it does support GeoJSON and KML so you can still show the features on the map. You should also check out Google Maps Engine (the new version of My Maps), which provides an interface for creating more advanced maps with a selection of different base maps. A premium version is available, essentially for creating map-based visualisations, and it provides a step up from the main Google Maps offering. A useful feature in some cases is that it gives you access to many datasets. Leaflet You have probably seen Leaflet before. 
It isn’t quite as popular as Google Maps but it is definitely used often and for good reason. Leaflet is a lightweight open source JavaScript library. It is not a service, so you don’t have to worry about API throttling and longevity. It gives you two options for tiling: the ability to use WMS, or to directly get the file using variables in the filename such as /{z}/{x}/{y}.png. I would recommend using WMS over dynamic file names because it is a standard, but the ability to use variables in a file name could be useful in some situations. Leaflet has a strong community and a well-documented API. Mapbox As a freemium service, Mapbox may not be perfect for every use case but it’s definitely worth looking into. The service offers incredible customisation tools as well as lots of data sources and hosting for the maps. It also provides plenty of libraries for the various platforms, so you don’t have to only use the maps on the web. Mapbox is a service, though its map design tool is open source. Mapbox Studio is a vector-only version of their previous tool called TileMill. Earlier I wrote about how typography and colour are as important to maps as they are to the rest of a website; if you thought, “Yes, but how on earth can I design those parts of a map?” then this is the tool for you. It is incredibly easy to use. Essentially each map has a stylesheet. If you do not want to open a paid-for Mapbox account, then you can export the tiles (as PNG, SVG etc.) to use with other map tools. OpenLayers After a long wait, OpenLayers 3 has been released. It is similar to Leaflet in that it is a library not a service, but it has a much broader scope. During the last year I worked on the GIS portal at Plymouth Marine Laboratory (which I used to show the data tiles earlier); it essentially used OpenLayers 2 to create a web-based geographic information system, taking a large amount of data and permitting analysis (such as graphs) without downloading entire datasets and complicated software. OpenLayers 3 has improved greatly on the previous version in both performance and accessibility. It is the ideal tool for complex map-based web apps, though it can be used for the simple use cases too. OpenStreetMap I couldn’t write an article about maps on the web without at least mentioning OpenStreetMap. It is the place to go for crowd-sourced data about any location, with complete road maps and a strong API. ViziCities The newest project on this list is ViziCities by Robin Hawkes and Peter Smart. It is an open source 3-D visualisation tool, currently in the very early stages of development. The basic example shows 3-D buildings around the world using OpenStreetMap data. Robin has used it to create some incredible demos such as real-time London underground trains, and planes landing at an airport. Edward Greer and I are currently working on using ViziCities to show ideal housing areas based on particular personas. We chose it because the 3-D aspect gives us interesting possibilities for the data we are able to visualise (such as bar charts on the actual map instead of in the UI). Despite not being a completely stable, fully featured system, ViziCities is worth taking a look at for some use cases and is definitely going to go from strength to strength. So there you have it – a whistle-stop tour of how maps can be customised. Now please stop plonking in maps without thinking about it and design them as you design the rest of your content. 
2014 Shane Hudson shanehudson 2014-12-11T00:00:00+00:00 https://24ways.org/2014/putting-design-on-the-map/ design
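To make those tiling options concrete, here’s a minimal Leaflet sketch – purely illustrative, with an assumed map element id and the public OpenStreetMap tile URL template standing in for whatever tile source you’d actually design:

// A minimal Leaflet sketch (illustrative; the element id and tile source
// are assumptions). It uses the /{z}/{x}/{y}.png scheme described above,
// with OpenStreetMap tiles as the base map.
var map = L.map( 'map' ).setView( [ 50.37, -4.14 ], 13 ); // roughly Plymouth

L.tileLayer( 'https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
	attribution: '&copy; OpenStreetMap contributors'
} ).addTo( map );

// The standards-based WMS route would use L.tileLayer.wms instead:
// L.tileLayer.wms( 'https://example.com/wms', { layers: 'base' } ).addTo( map );

Swapping the tile URL for one produced by a tool like Mapbox Studio is exactly where the base map design discussed earlier comes in.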
218 Put Yourself in a Corner Some backstory, and a shameful confession For the first couple years of high school I was one of those jerks who made only the minimal required effort in school. Strangely enough, how badly I behaved in a class was always in direct proportion to how skilled I was in the subject matter. In the subjects where I was confident that I could pass without trying too hard, I would give myself added freedom to goof off in class. Because I was a closeted lit-nerd, I was most skilled in English class. I’d devour and annotate required reading over the weekend, I knew my biblical and mythological allusions up and down, and I could give you a postmodern interpretation of a text like nobody’s business. But in class, I’d sit in the back and gossip with my friends, nap, or scribble patterns in the margins of my textbooks. I was nonchalant during discussion, I pretended not to listen during lectures. I secretly knew my stuff, so I did well enough on tests, quizzes, and essays. But I acted like an ass, and wasn’t getting the most I could out of my education. The day of humiliation, but also epiphany One day in Ms. Kaney’s AP English Lit class, I was sitting in the back doodling. An earbud was dangling under my sweater hood, attached to the CD player (remember those?) sitting in my desk. Because of this auditory distraction, the first time Ms. Kaney called my name, I barely noticed. I definitely heard her the second time, when she didn’t call my name so much as roar it. I can still remember her five-foot frame stomping across the room and grabbing an empty desk. It screamed across the worn tile as she slammed it next to hers. She said, “This is where you sit now.” My face gets hot just thinking about it. I gathered my things, including the CD player (which was now impossible to conceal), and made my way up to the newly appointed Seat of Shame. There I sat, with my back to the class, eye-to-eye with Ms. Kaney. From my new vantage point I couldn’t see my friends, or the clock, or the window. All I saw were Ms. Kaney’s eyes, peering at me over her reading glasses while I worked. In addition to this punishment, I was told that from now on, not only would I participate in class discussions, but I would serve detention with her once a week until an undetermined point in the future. During these detentions, Ms. Kaney would give me new books to read, outside the curriculum and in addition to my normal homework. They ranged from classics to modern novels, and she read over my notes on each book. We’d discuss them at length after class, and I grew to value not only our private discussions, but the ones in class as well. After a few weeks, there wasn’t even a question of this being punishment. It was heaven, and I was more productive than ever. To the point Please excuse this sentimental story. It’s not just about honoring a teacher who cared enough to change my life, it’s really about sharing a lesson. The most valuable education Ms. Kaney gave me had nothing to do with literature. She taught me that I (and perhaps other people who share my special brand of crazy) need to be put in a corner to flourish. When we have physical and mental constraints applied, we accomplish our best work. For those of you still reading, now seems like a good time to insert a pre-emptive word of mediation. Many of you, maybe all of you, are self-disciplined enough that you don’t require the rigorous restrictions I use to maximize productivity. 
Also, I know many people who operate best in a stimulating and open environment. I would advise everyone to seek and execute techniques that work best for them. But, for those of you who share my inclination towards daydreams and digressions, perhaps you’ll find something useful in the advice to follow. In which I pretend to be Special Agent Olivia Dunham Now that I’m an adult, and no longer have Ms. Kaney to rein me in, I have to find ways to put myself in the corner. By rejecting distraction and shaping an environment designed for intense focus, I’m able to achieve improved productivity. Lately I’ve been obsessed with the TV show Fringe, a sci-fi series about an FBI agent and her team of genius scientists who save the world (no, YOU’RE a nerd). There’s a scene in the show where the primary character has to delve into her subconscious to do extraordinary things, and she accomplishes this by immersing herself in a sensory deprivation tank. The premise is this: when enclosed in a space devoid of sound, smell, or light, she will enter a new plane of consciousness wherein she can tap into new levels of perception. This might sound a little nuts, but to me this premise has some real-world application. When I am isolated from distraction, and limited to only the task at hand, I’m able to be productive on a whole new level. Since I can’t actually work in an airtight iron enclosure devoid of input, I find practical ways to create an interruption-free environment. Since I work from home, many of my methods for coping with distractions wouldn’t be necessary for my office-bound counterpart. However, for some of you 9-to-5-ers, the principles will still apply. Consider your visual input First, I have to limit my scope to the world I can (and need to) affect. In the largest sense, this means closing my curtains to the chaotic scene of traffic, birds, the post office, a convenience store, and generally lovely weather that waits outside my window. When the curtains are drawn and I’m no longer surrounded by this view, my sphere is reduced to my desk, my TV, and my cat. Sometimes this step alone is enough to allow me to focus. But, my visual input can be whittled down further still. For example, the desk where I usually keep my laptop is littered with twelve owl figurines, a globe, four books, a three-pound weight, and various nerdy paraphernalia (hard drives, Wacom tablets, unnecessary bluetooth accessories, and so on). It’s not so much a desk as a dumping ground for wacky flea market finds and impulse technology buys. Therefore, in addition to this Official Desk, I have an adult version of Ms. Kaney’s Seat of Shame. It’s a rusty old student’s desk I picked up at the Salvation Army, almost an exact replica of the model Ms. Kaney dragged across the classroom all those years ago. This tiny reproduction Seat of Shame is literally in a corner, where my only view is a blank wall. When I truly need to focus, this is where I take refuge, with only a notebook and a pencil (and occasionally an iPad). Find out what works for your ears Even from my limited sample size of two people, I know there are lots of different ways to cope with auditory distraction. I prefer silence when focused on independent work, and usually employ some form of a white noise generator. I’ve yet to opt for the fancy ‘real’ white noise machines; instead, I use a desktop fan or our allergy filter machine. This is usually sufficient to block out the sounds of the dishwasher and the cat, which allows me to think only about the task at hand. 
My boyfriend, the other half of my extensive survey, swears by another method. He calls it The Wall of Sound, and it’s basically an intense blast of raucous music streamed directly into his head. The outcome of his technique is really the same as mine; he’s blocking out unexpected auditory input. If you can handle the grating sounds of noisy music while working, I suggest you give The Wall of Sound a try. Don’t count the minutes When I sat in the original Seat of Shame in lit class, I could no longer see the big classroom clock slowly ticking away the seconds until lunch. Without the marker of time, the class period often flew by. The same is true now when I work; the less aware of time I am, the less it feels like time is passing too quickly or slowly, and the more I can focus on the task (not how long it takes). Nowadays, to assist in my effort to forget the passing of time, I sometimes put a sticky note over the clock on my monitor. If I’m writing, I’ll use an app like WriteRoom, which blocks out everything but a simple text editor. There are situations when it’s not advisable to completely lose track of time. If I’m working on a project with an hourly rate and a tight scope, or if I need to be on time to a meeting or call, I don’t want to lose myself in the expanse of the day. In these cases, I’ll set an alarm that lets me know it’s time to rein myself back in (or on some days, take a shower). Put yourself in a mental corner, too When Ms. Kaney took action and forced me to step up my game, she had the insight to not just change things physically, but to challenge me mentally as well. She assigned me reading material outside the normal coursework, then upped the pressure by requiring detailed reports of the material. While this additional stress was sometimes uncomfortable, it pushed me to work harder than I would have had there been less of a demand. Just as there can be freedom in the limitations of a distraction-free environment, I’d argue there is liberty in added mental constraints as well. Deadlines as a constraint Much has been written about the role of deadlines in the creative process, and they seem to serve different functions in different cases. I find that deadlines usually act as an important constraint and, without them, it would be nearly impossible for me to ever consider a project finished. There are usually limitless ways to improve upon the work I do and, if there’s no imperative for me to be done at a certain point, I will revise ad infinitum. (Hence, the personal site redesign that will never end – Coming Soon, Forever!). But if I have a clear deadline in mind, there’s a point when the obsessive tweaking has to stop. I reach a stage where I have to gather up the nerve to launch the thing. Putting the pro in procrastination Sometimes I’ve found that my tendency to procrastinate can help my productivity. (Ducks, as half the internet throws things at her.) I understand the reasons why procrastination can be harmful, and why it’s usually a good idea to work diligently and evenly towards a goal. I try to divide my projects up in a practical way, and sometimes I even pull it off. But for those tasks where you work aimlessly and no focus comes, or you find that every other to-do item is more appealing, sometimes you’re forced to bring it together at the last moment. And sometimes, this environment of stress is a formula for magic. Often when I’m down to the wire and have no choice but to produce, my mind shifts towards a new level of clarity. 
There’s no time to endlessly browse for inspiration, or experiment with convoluted solutions that lead nowhere. Obviously a life lived perpetually on the edge of a deadline would be a rather stressful one, so it’s not a state of being I’d advocate for everyone, all the time. But every now and then, the work done when I’m down to the wire is my best. Keep one toe outside your comfort zone When I’m choosing new projects to take on, I often seek out work that involves an element of challenge. Whether it’s a design problem that will require some creative thinking, or a coding project that lends itself to using new technology like HTML5, I find a manageable level of difficulty to be an added bonus. The tension that comes from learning a new skill or rethinking an old standby is a useful constraint, as it keeps the work interesting, and ensures that I continue learning. There you have it Well, I think I’ve spilled most of my crazy secrets for forcing my easily distracted brain to focus. As with everything we web workers do, there are an infinite number of ways to encourage productivity. I hope you’ve found a few of these to be helpful, and please share your personal techniques in the comments. Have a happy and productive new year! 2010 Meagan Fisher meaganfisher 2010-12-20T00:00:00+00:00 https://24ways.org/2010/put-yourself-in-a-corner/ process
297 Public Speaking with a Buddy My book Demystifying Public Speaking focuses on the variety of fears we each have about giving a talk. From presenting to a client, to leading a team standup, to standing on a conference stage, there are lots of things we can do to prepare ourselves for the spotlight and reduce those fears. Though it didn’t make it into the final draft, I wanted to highlight how helpful it can be to share that public speaking spotlight with another person, or a few more people. If you have fears about not knowing the answer to a question, fumbling your words, or making a mistake in the spotlight, then buddying up may be for you! To some, adding more people to a presentation sounds like a recipe for on-stage disaster. To others, having a friendly face nearby—a partner who can step in if you fumble—is incredibly reassuring. As design director Yesenia Perez-Cruz writes, “While public speaking is a deeply personal activity, you don’t have to go it alone. Nothing has helped my speaking career more than turning it into a group effort.” Co-presenting can level up a talk in two ways: an additional brain and presentation skill set can improve the content of the talk itself, and you may feel safer with the on-stage safety net of your buddy. For example, when I started giving lengthy workshops about building mobile device labs with my co-worker Destiny Montague, we brought different experience to the table. I was able to talk about the user experience of our lab, and the importance of testing across different screen sizes. Destiny spoke about the hardware aspects of the lab, like power consumption and networking. Our audience benefitted from the spectrum of insight we included in the talk. Moreover, Destiny and I kept each other energized and engaging while teaching our audience, having way more fun onstage. Partnering up alleviated the risk (and fear!) of fumbling; when one person makes a mistake, the other person is right there to help. Buddy presentations can be helpful if you fear saying “I don’t know” to a question, as there are other people around you who will be able to help answer it from the stage. Partnering with someone whom I trust and respect, and whose work and knowledge augment my own, made the experience—and the presentation!—significantly better. Co-presenting won’t work if you don’t trust the person you’re onstage with, or if you don’t have good chemistry working together. It might also not work if there’s an imbalance of responsibilities, both in preparing the talk and giving it. Read on for how to make partner talks work to your advantage! Trustworthiness If you want to explore co-presenting, make sure that your presentation partner is trustworthy and can carry their weight; it can be stressful if you find yourself trying to meet deadlines and prepare well and your partner isn’t being helpful. We’re all about reducing the fears and stress levels surrounding being in that spotlight onstage; make sure that the person you’re relying on isn’t making the process harder. Before you start working together, sketch out the breakdown of work and timeline you’re each committing to. Have a conversation about your preferred work style so you each have a concrete understanding of the best ways to communicate (in what medium, and how often) and how to check in on each other’s progress without micromanaging or worrying about radio silence.
Ask your buddy how they prefer to receive feedback, and give them your own feedback preferences, so neither of you are surprised or offended when someone’s work style or deliverable needs to be tweaked. This should be a partnership in which you both feel supported; it’s healthy to set all these expectations up front, and create a space in which you can each tweak things as the work progresses. Talk flow and responsibilities There are a few different ways to organize the structure of your talk with multiple presenters. Start by thinking about the breakdown of the talk content—are there discrete parts you and the other presenters can own or deliver? Or does it feel more appropriate to deliver the entirety of the content together? If you’re finding that you can break down the content into discrete chunks, figure out who should own which pieces, and what ownership means. Will you develop the content together but have only one person present the information? Or will one person research and prepare each content section in addition to delivering it solo onstage? Rehearse how handoffs will go between sections so it feels natural, rather than stilted. I like breaking a presentation into “chapters” when I’m passionate about particular aspects of a topic and can speak on those, but know that there are other aspects to be shared and there’s someone else who can handle (and enjoy!) talking about them. When Destiny and I rehearsed our “chapter” handoffs, we developed little jingles that we’d both sing together onstage; it indicated to the audience that it was a planned transition in the content, and tied our independent work together into a partnership. Alternatively, you can give the presentation in a way that’s close to having a rehearsed conversation, rather than independently presenting discrete parts of the talk. In this case, you’ll both be sharing the spotlight at the same time, throughout the duration of the talk. Preparation is key, here, to make sure that you each understand what needs to be communicated, and you have a sense of who will be taking responsibility for communicating those different pieces of information. A poorly-prepared talk like this will look like the co-presenters are talking over each other, or hesitating awkwardly to give the other person more room to speak; the audience will feel how uncomfortable this is, and will probably be distracted from the talk content. Practice the talk the whole way through multiple times so you know what each person is planning on covering and how you want to interact with each other while you’re both holding microphones; also figure out how you’ll be standing in relation to each other. More on that next! Sharing the stage If you choose to give a talk with a partner, determine ahead of time how you’ll stand (or sit). For example, if you each take “chapters” or major sections of the presentation, ensure that it’s clear who the audience should focus their attention on. You could sit in a chair off to the side (or stand). I recommend placing yourself far enough away that you’re not distracting to the audience; you don’t want them watching you while your partner is speaking.
If the audience can still see you, but their focus should be on your buddy, be sure not to look distracted; keep your eyes on your buddy, and don’t just open your laptop and ignore what’s happening! Feel free to smile, laugh, or react how the audience should be reacting as your partner is speaking. If you’re both sharing the spotlight at the same time and having a rehearsed conversation, make sure that your body language engages the audience and you’re not just speaking to each other, ignoring the folks watching. Watch this talk with Guy Podjarny and Assaf Hefetz, who have partnered up to talk about security; they have clearly identified roles onstage, and remain engaged with the audience. Consider whether or not you will share a microphone, or if you will both be mic’d. (Be sure that the event organizer, or the A/V team, has a heads-up well in advance to ensure they have the equipment handy!) Also talk through how you’d like to handle Q&A time during or after the talk, especially if you have clear “chapters” where Q&A might happen naturally during a handoff. The more clarity you and your partner have about who is responsible for which pieces of information sharing, the more you can feel and appear prepared. Co-presenting does take a lot of preparation and requires a ton of communication between you and your partner. But the rewards can be awesome: double the brains onstage to help answer questions and communicate information, and a friendly face to help comfort you if you feel nervous. 2016 Lara Hogan larahogan 2016-12-06T00:00:00+00:00 https://24ways.org/2016/public-speaking-with-a-buddy/ process
3 Project Hubs: A Home Base for Design Projects SCENE: A design review meeting. Laptop screens. Coffee cups. Project manager: Hey, did you get my email with the assets we’ll be discussing? Client: I got an email from you, but it looks like there’s no attachment. PM: Whoops! OK. I’m resending the files with the attachments. Check again? Client: OK, I see them. It’s homepage_v3_brian-edits_FINAL_for-review.pdf, right? PM: Yeah, that’s the one. Client: OK, hang on, Bill’s going to print them out. (3-minute pause. Small talk ensues.) Client: Alright, Bill’s back. We’re good to start. Brian: Oh, actually those homepage edits we talked about last time are in the homepage_v4_brian_FINAL_v2.pdf document that I posted to Basecamp earlier today. Client: Oh, OK. What message thread was that in? Brian: Uh, I’m pretty sure it’s in “Homepage Edits and Holiday Schedule.” Client: Alright, I see them. Bill’s going back to the printer. Hang on a sec… This is only a slightly exaggerated version of my experience in design review meetings. The design project dance is a sloppy one. It involves a slew of email attachments, PDFs, PSDs, revisions, GitHub repos, staging environments, and more. And while tools like Basecamp can help manage all these moving parts, it can still be incredibly challenging to extract only the important bits, juggle deliverables, and see how your project is progressing. Enter project hubs. Project hubs A project hub consolidates all the key design and development materials onto a single webpage presented in reverse chronological order. The timeline lives online (either publicly available or password protected), so that everyone involved in the team has easy access to it. A project hub. I was introduced to project hubs after seeing Dan Mall’s open redesign of Reading Is Fundamental. Thankfully, I had a chance to work with Dan on two projects where I got to see firsthand how beneficial a project hub can be. Here’s what makes a project hub great: Serves as a centralized home base for the project Trains clients and teams to decide in the browser Shows the project’s progress easily and visually Provides an archive for project artifacts A home base Your clients and colleagues can expect to get the latest and greatest updates to your project when visiting the project hub, the same way you’d expect to get the latest information on a requested topic when you visit a Wikipedia page. That’s the beauty of URIs that don’t change. Creating a project hub reduces a ton of email volley nonsense, and eliminates the need to produce files and directories with staggeringly ridiculous names like design/12.13.13/team/brian/for_review/_FINAL/styletile_121313_brian-edits-final_v2_FINAL.pdf. The team can simply visit the project hub’s URL and click the link to whatever artifact they need. Need to make an update? Simply update the link on the project hub. No more email tango and silly file names. Deciding in the browser Let’s change the phrase “designing in the browser” to “deciding in the browser.” Dan Mall We make websites, but all too often we find ourselves looking at web design artifacts in abstractions. We email PDFs to each other, glance at mockup JPGs on our desktops, and of course kill trees in order to print out designs so that we can scribble in the margins. All of these practices subtly take everyone further and further away from the design’s eventual final resting place: the browser.
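Part of what makes this work is that a hub needn’t be anything fancier than a single page of dated links, newest first. As a purely illustrative sketch – the structure, class names and URLs here are invented for this example, not lifted from Dan’s or anyone else’s actual template – the markup could be as simple as:

<!-- a project hub: one page, newest work at the top -->
<h1>Example Client: responsive redesign</h1>
<ol class="timeline">
  <li>
    <h2>17 December: homepage design, round two</h2>
    <a href="http://hub.example.com/homepage-v2/">Review in the browser</a>
  </li>
  <li>
    <h2>10 December: style tiles</h2>
    <a href="http://hub.example.com/style-tiles/">Review in the browser</a>
  </li>
</ol>

Each new deliverable is just another list item added to the top, linking to something viewable in the browser.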
Because a project hub is just a simple webpage, reviewing designs is as easy as clicking some links, which keeps your clients and teams in the browser. You can keep people in the browser with yet another clever trick from the wily Dan Mall: instead of sending clients PDFs or JPGs, he created a simple webpage and tossed his static visuals into the template (you can view an example here). This forces clients to review web design work in the browser rather than launching a PDF viewer or Preview. Now this all might sound trivial to you (“Of course my client knows that we’re designing a website!”), but keeping the design artifacts in the browser subconsciously helps remind everyone of the medium for which you’re designing, which helps everyone focus on the right aspects of the design and have the right conversations. Progress over time When you’re in the trenches, it’s often hard to visualize how a project is progressing. Tools like Basecamp include discussions, files, to-dos, and more, which are all great tools but also make things a bit noisy. Project hubs provide you and your clients a quick and easy way to see at a glance how things are coming along. Teams can rest assured they’re viewing the most current versions of designs, and managers can share progress with stakeholders simply by providing a link to the project hub. Over time, a project hub becomes an easily accessible archive of all the design decisions, which makes it easy to compare and contrast different versions of designs and prototypes. Setting up a project hub Setting up your own project hub is pretty simple. Simply create a webpage with some basic styles and branding. I’ve created a project hub template that’s available on GitHub if you want a jump-start. Publish the webpage to a URL somewhere that makes sense (we’ve found that a subdomain of your site works quite well) and share it with everyone involved in the project. Bookmark it. Let everyone know that this is where design updates will be shared, and that they can always come back to the project hub to track the project’s progress. When it comes time to share new updates, simply add a new node to the timeline and republish the webpage. Simple FTPing works just fine, but it might make sense to keep track of changes using version control. Our project hub for our open redesign of the Pittsburgh Food Bank is managed on GitHub, which means that I can make edits to the hub right from GitHub. Thanks to the magical wizardry of webhooks, I can automatically deploy the project hub so that everything stays in sync. That’s the fancy-pants way to do it, and is certainly not a requirement. As long as you’re able to easily make edits and keep your project hub up to date, you’re good to go. So that’s the hubbub Project hubs can help tame the chaos of the design process by providing a home base for all key design and development materials. Keep the design artifacts in the browser and give clients and colleagues quick insight into your project’s progress. Happy hubbing! 2013 Brad Frost bradfrost 2013-12-17T00:00:00+00:00 https://24ways.org/2013/project-hubs/ process
312 Preparing to Be Badass Next Year Once we’ve eaten our way through the holiday season, people will start to think about new year’s resolutions. We tend to focus on things that we want to change… and often things that we don’t like about ourselves to “fix”. We set rules for ourselves, or try to start new habits or stop bad ones. We focus in on things we will or won’t do. For many of us the list of things we “ought” to be spending time on is just plain overwhelming – family, charity/community, career, money, health, relationships, personal development. It’s kinda scary even just listing it out, isn’t it? I want to encourage you to think differently about next year. The ever-brilliant Kathy Sierra articulates a better approach really well when talking about the attitude we should have to building great products. She tells us to think not about what the user will do with our product, but about what they are trying to achieve in the real world and how our product helps them to be badass [1]. When we help the user be badass, then we are really making a difference. I suppose this is one way of saying: focus not on what you will do, focus on what it will help you achieve. How will it help you be awesome? In what ways do you want to be more badass next year? A professional lens Though of course you might want to focus in on health or family or charity or community or another area next year, many people will want to become more badass in their chosen career. So let’s talk about a scaffold to help you figure out your professional / career development next year. First up, an assumption: everyone wants to be awesome. Nobody gets up in the morning aiming to be crap at their job. Nobody thinks to themselves “Today I am aiming for just south of mediocre, and if I can mess up everybody else’s ability to do good work then that will be just perfect [2]”. Ergo, you want to be awesome. So what does awesome look like? Danger! The big trap that people fall into when thinking about their professional development is to immediately focus on the things that they aren’t good at. When you ask people “what do you want to work on getting better at next year?” they frequently gravitate to the things that they believe they are bad at. Why is this a trap? Because if you focus all your time and energy on improving the areas that you suck at, you are going to end up middling at everything. Going from bad → mediocre at a given skill / behaviour takes a bunch of time and energy. So if you spend all your time going from bad → mediocre at things, what do you think you end up as? That’s right, mediocre. Mediocrity is not a great career goal, kids. What do you already rock at? The much better investment of time and energy is to go from good → awesome. It often takes the same amount of relative time and energy, but wow the end result is better! So first, ask yourself and those who know you well what you are already pretty damn good at. Combat imposter syndrome by asking others. Then figure out how to double down on those things. What does brilliant look like for a given skill? What’s the knowledge or practice that you need to level yourself up even further in that thing? But what if I really really suck? Admittedly, sometimes something you suck at really is holding you back. But it’s important to separate out weaknesses (just something you suck at) from controlling weaknesses (something you suck at that actually matters for your chosen career).
If skill x is just not an important thing for you to be good at, you may never need to care that you aren’t good at it. If your current role or the one you aspire to next really really requires you to be great at x, then it’s worth investing your time and energy (and possibly money too) getting better at it. So when you look at the things that you aren’t good at, which of those are actually essential for success? The right ratio A good rule of thumb is to pick three things you are already good at to work on becoming awesome at and limit yourself to one weakness that you are trying to improve on. That way you are making sure that you get to awesome in areas where you already have an advantage, and limit the amount of time you are spending on going from bad → mediocre. Levelling up learning So once you’ve figured out your areas you want to focus on next year, what do you actually decide to do? Most of all, you should try to design your day-to-day work in a way that it is also an effective learning experience. This means making sure you have a good feedback loop – you get to try something, see if it works, learn from it, rinse and repeat. It’s also about balance: you want to be challenged enough for work to be interesting, without it being so hard it’s frustrating. You want to do similar / the same things often enough that you get to learn and improve, without it being so repetitive that it’s boring. Continuously getting better at things you are already good at is actually both easier and harder than it sounds. The advantage is that it’s pretty easy to add the feedback loop to make sure that you are improving; the disadvantage is that you’re already good at these skills so you could easily just “do” without ever stopping to reflect and improve. Build in time for personal retrospectives (“What went well? What didn’t? What one thing will I choose to change next time?”) and find a way of getting feedback from outside sources as well. As for the new skills, it’s worth knowing that skill development follows a particular pattern: We all start out unconsciously incompetent (we don’t know what to do and if we tried we’d unwittingly get it wrong), progress on to conscious incompetence (we now know we’re doing it wrong) then conscious competence (we’re doing it right but wow it takes effort and attention) and eventually get to unconscious competence (automatically getting it right). Your past experiences and knowledge might let you move faster through these stages, but no one gets to skip them. Invest the time and remember you need the feedback loop to really improve. What about keeping up? Everything changes very fast in our industry. We need to invest in not falling behind, in keeping on top of what great looks like. There are a bunch of ways to do this, from reading blog posts, following links on Twitter, reading books to attending conferences or workshops, or just finding time to build things in new ways or with new technologies. Which will work best for you depends on how you best learn. Do you prefer to swallow a book? Do you learn most by building or experimenting? Whatever your learning style though, remember that there are three real needs: Scan the landscape (what’s changing, does it matter) Gain the knowledge or skills (get the detail) Apply the knowledge or skills (use it in reality) When you remember that you need all three of these things it can help you get more out of what you do. For me personally, I use a combination of conferences and blogs / Twitter to scan the landscape.
Half of what I want out of a conference is just a list of things to have on my radar that might become important. I then pick a couple of things to go read up on more (I personally learn most effectively by swallowing a book or spec or similar). And then I pick one thing at a time to actually apply in real life, to embed the skill / knowledge. In summary Aim to be awesome (mediocrity is not a career goal). Figure out what you already rock at. Only care about stuff you suck at that matters for your career. Pick three things to go from good → awesome and one thing to go from bad → mediocre (or mediocre → good) this year. Design learning into your daily work. Scan the landscape, learn new stuff, apply it for real. Be badass! [1] She wrote a whole book about it. You should read it: Badass: Making Users Awesome [2] Before you argue too vehemently: I suppose some antisocial sociopathic bastards do exist. Identify them, and then RUN AWAY FAST AS YOU CAN #realtalk 2016 Meri Williams meriwilliams 2016-12-22T00:00:00+00:00 https://24ways.org/2016/preparing-to-be-badass-next-year/ business
336 Practical Microformats with hCard You’ve probably heard about microformats over the last few months. You may have even read the easily digestible introduction at Digital Web Magazine, but perhaps you’ve not found time to actually implement much yet. That’s understandable, as it can sometimes be difficult to see exactly what you’re adding by applying a microformat to a page. Sure, you’re semantically enhancing the information you’re marking up, and the Semantic Web is a great idea and all, but what benefit is it right now, today? Well, the answer to that question is simple: you’re adding lots of information that can be and is being used on the web here and now. The big ongoing battle amongst the big web companies is one of territory over information. Everyone’s grasping for as much data as possible. Some of that information many of us are cautious to give away, but a lot of it we’re happy to make freely available. Of the data you’re giving away, it makes sense to give it as much meaning as possible, thus enabling anyone from your friends and family to the giant search company down the road to make the most of it. Ok, enough of the waffle, let’s get working. Introducing hCard You may have come across hCard. It’s a microformat for describing contact information (or really address book information) from within your HTML. It’s based on the vCard format, which is the format the contacts/address book program on your computer uses. All the usual fields are available – name, address, town, website, email, you name it. If you’re running Firefox and Greasemonkey (or if you can, just to try this out), install this user script. What it does is look for instances of the hCard microformat in a page, and then add in a link to pass any hCards it finds to a web service which will convert them to vCards. Take a look at the About the author box at the bottom of this article. It’s an hCard, so you should be able to click the icon the user script inserts and add me to your Outlook contacts or OS X Address Book with just a click. So microformats are useful after all. Free microformats all round! Implementing hCard This is the really easy bit. All the hCard microformat is, is a bunch of predefined class names that you apply to the markup you’ve probably already got around your contact information. Let’s take the example of the About the author box from this article. Here’s how the markup looks without hCard: <div class="bio"> <h3>About the author</h3> <p>Drew McLellan is a web developer, author and no-good swindler from just outside London, England. At the <a href="http://www.webstandards.org/">Web Standards Project</a> he works on press, strategy and tools. Drew keeps a <a href="http://www.allinthehead.com/">personal weblog</a> covering web development issues and themes.</p> </div> This is a really simple example because there’s only two key bits of address book information here: my name and my website address. Let’s push it a little and say that the Web Standards Project is the organisation I work for – that gives us Name, Company and URL. To kick off an hCard, you need a containing object with a class of vcard. The div I already have with a class of bio is perfect for this – all it needs to do is contain the rest of the contact information. The next thing to identify is my name. hCard uses a class of fn (meaning Full Name) to identify a name. As in this case there’s no element surrounding my name, we can just use a span.
These changes give us: <div class="bio vcard"> <h3>About the author</h3> <p><span class="fn">Drew McLellan</span> is a web developer... The two remaining items are my URL and the organisation I belong to. The class names designated for those are url and org respectively. As both of those items are links in this case, I can apply the classes to those links. So here’s the finished hCard. <div class="bio vcard"> <h3>About the author</h3> <p><span class="fn">Drew McLellan</span> is a web developer, author and no-good swindler from just outside London, England. At the <a class="org" href="http://www.webstandards.org/">Web Standards Project</a> he works on press, strategy and tools. Drew keeps a <a class="url" href="http://www.allinthehead.com/">personal weblog</a> covering web development issues and themes.</p> </div> OK, that was easy. By just applying a few easy class names to the HTML I was already publishing, I’ve implemented an hCard that right now anyone with Greasemonkey can click to add to their address book, that Google and Yahoo! and whoever else can index and work out important things like which websites are associated with my name if they so choose (and boy, will they so choose), and in the future who knows what. In terms of effort, practically nil. Where next? So that was a trivial example, but to be honest it doesn’t really get much more complex even with the most pernickety permutations. Because hCard is based on vCard (a mature and well thought-out standard), it’s all tried and tested. Here are some good next steps. Play with the hCard Creator Take a deep breath and read the spec Start implementing hCard as you go on your own projects – it takes very little time hCard is just one of an ever-increasing number of microformats. If this tickled your fancy, I suggest subscribing to the microformats site in your RSS reader to keep in touch with new developments. What’s the take-away? The take-away is this. They may sound like just more Web 2-point-HoHoHo hype, but microformats are a well thought-out and easy to implement way of adding greater depth to the information you publish online. They have some nice benefits right away – certainly at geek-level – but in the longer term they become much more significant. We’ve been at this long enough to know that the web has a long, long memory and that what you publish today will likely be around for years. But by putting the extra depth of meaning into your documents now, you can help ensure that they’ll continue to be useful in the future, and not just a bunch of flat ASCII. 2005 Drew McLellan drewmclellan 2005-12-06T00:00:00+00:00 https://24ways.org/2005/practical-microformats-with-hcard/ code
134 Photographic Palettes How many times have you seen a colour combination that just worked, a match so perfect that it just seems obvious? Now, how many times do you come up with those in your own work? A perfect palette looks easy when it’s done right, but it’s often maddeningly difficult and time-consuming to accomplish. Choosing effective colour schemes will always be more art than science, but there are things you can do that will make coming up with that oh-so-smooth palette just a little bit easier. A simple trick that can lead to incredibly gratifying results lies in finding a strong photograph and sampling out particularly harmonious colours. Photo Selection Not all photos are created equal. You certainly want to start with imagery that fits the eventual tone you’re attempting to create. A well-lit photo of flowers might lead to a poor colour scheme for a funeral parlour’s web site, for example. It’s worth thinking about what you’re trying to say in advance, and finding a photo that lends itself to your message. As a general rule of thumb, photos that have a lot of neutral or de-saturated tones with one or two strong colours make for the best palette; bright and multi-coloured photos are harder to derive pleasing results from. Let’s start with a relatively neutral image. Sampling In the above example, I’ve surrounded the photo with three different background colours directly sampled from the photo itself. Moving from left to right, you can see how each of the sampled colours is from an area of increasingly smaller coverage within the photo, and yet there’s still a strong harmony between the photo and the background image. I don’t really need to pick the big obvious colours from the photo to create that match; I can easily concentrate on more interesting colours that might work better for what I intend. Using a similar palette, let’s apply those colour choices to a more interesting layout: In this mini-layout, I’ve re-used the same tan colour from the previous middle image as a background, and sampled out a nicely matching colour for the top and bottom overlays, as well as the two different text colours. Because these colours all fall within a narrow range, the overall balance is harmonious. What if I want to try something a little more daring? I have a photo of stacked chairs of all different colours, and I’d like to use a few more of those. No problem, provided I watch my colour contrast: Though it uses varying shades of red, green, and yellow, this palette actually works because the values are even, and the colours muted. Placing red on top of green is usually a hideous combination of death, but if the green is drab enough and the red contrasts well enough, the result can actually be quite pleasing. I’ve chosen red as my loudest colour in this palette, and left green and yellow to play the quiet supporting roles. Obviously, there are no hard and fast rules here. You might not want to sample absolutely every colour in your scheme from a photo. There are times where you’ll need a variation that’s just a little bit lighter, or a blue that’s not in the photo. You might decide to start from a photo base and tweak, or add in colours of your own. That’s okay too. Tonal Variations I’ll leave you with a final trick I’ve been using lately, a way to bring a bit more of a formula into the equation and save some steps. Starting with the same base palette I sampled from the chairs, in the above image I’ve added a pair of overlaying squares that produce tonal variations of each primary.
The lighter variation is simply a solid white square set to 40% opacity; the darker one is a black square at 20%. That gives me a highlight and shadow for each colour, which would prove handy if I had to adapt this colour scheme to a larger layout. I could add a few more squares of varying opacities, or adjust the layer blending modes for different effects, but as this looks like a great place to end, I’ll leave that up to your experimental whims. Happy colouring! 2006 Dave Shea daveshea 2006-12-22T00:00:00+00:00 https://24ways.org/2006/photographic-palettes/ design
166 Performance On A Shoe String Back in the summer, I happened to notice the official Wimbledon All England Tennis Club site had jumped to the top of Alexa’s Movers & Shakers list — a list that tracks sites that have had the biggest upturn or downturn in traffic. The lawn tennis championships were underway, and so traffic had leapt from almost nothing to crazy-busy in no time at all. Many sites have similar peaks in traffic, especially when they’re based around scheduled events. No one cares about the site for most of the year, and then all of a sudden – wham! – things start getting warm in the data centre. Whilst the thought of chestnuts roasting on an open server has a certain appeal, it’s less attractive if you care about your site being available to visitors. Take a look at this Alexa traffic graph showing traffic patterns for superbowl.com at the beginning of each year, and wimbledon.org in the month of July. Traffic graph from Alexa.com Whilst not on the same scale or with such dramatic peaks, we have a similar pattern of traffic here at 24ways.org. Over the last three years we’ve seen a dramatic pick up in traffic over the month of December (as would be expected) and then a much lower, although steady load throughout the year. What we do have, however, is the luxury of knowing when the peaks will be. For a normal site, be that a blog, small scale web app, or even a small corporate site, you often just cannot predict when you might get slashdotted, end up on the front page of Digg or linked to from a similarly high-profile site. You just don’t know when the peaks will be. If you’re a big commercial enterprise like the Super Bowl, scaling up for that traffic is simply a cost of doing business. But for most of us, we can’t afford to have massive capacity sat there unused for 90% of the year. What you have to do instead is work out how to deal with as much traffic as possible with the modest resources you have. In this article I’m going to talk about some of the things we’ve learned about keeping 24 ways running throughout December, whilst not spending a fortune on hosting we don’t need for 11 months of each year. We’ve not always got it right, but we’ve learned a lot along the way. The Problem To know how to deal with high traffic, you need to have a basic idea of what happens when a request comes into a web server. 24 ways is hosted on a single small virtual dedicated server with a great little hosting company in the UK. We run Apache with PHP and MySQL all on that one server. When a request comes in, a new Apache process is started to deal with the request (or assigned if there’s one available not doing anything). Each process takes a bunch of memory, so there’s a finite number of processes that you can run, and therefore a finite number of pages you can serve at once before your server runs out of memory. With our budget based on whatever is left over after beer, we need to get the best performance we can out of the resources available. As the goal is to serve as many pages as quickly as possible, there are several approaches we can take: Reducing the amount of memory needed by each Apache process Reducing the amount of time each process is needed Reducing the number of requests made to the server Yahoo! have published some information on what they call Exceptional Performance, which is well worth reading, and complements many of my examples here. The Yahoo! guidelines very much look at things from a user perspective, which is always important.
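To put some rough numbers on the memory maths above – and these figures are purely illustrative, not our actual server spec – the back-of-envelope sum looks like this:

max simultaneous requests ≈ (total RAM − memory used by the OS, MySQL and friends) ÷ memory per Apache process

So a 256MB virtual server that loses 100MB to the operating system and MySQL, with each Apache process weighing in at around 20MB, can handle roughly (256 − 100) ÷ 20, or about eight requests at once. Trim each process down to 15MB and that climbs to ten. That’s why everything that follows is so fixated on freeing up memory and getting requests off the server altogether.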
Server tweaking If you’re in the position of being able to change your server configuration (our set-up gives us root access to what is effectively a virtual machine) there are some basic steps you can take to maximise the available memory and reduce the memory footprint. Without getting too boring and technical (whole books have been written on this) there are a couple of things to watch out for. Firstly, check what processes you have running that you might not need. Every megabyte of memory that you free up might equate to several thousand extra requests being served each day, so take a look at top and see what’s using up your resources. Quite often a machine configured as a web server will have some kind of mail server running by default. If your site doesn’t use mail (ours doesn’t) make sure it’s shut down and not using resources. Secondly, have a look at your Apache configuration and particularly what modules are loaded. The method for doing this varies between versions of Apache, but again, every module loaded increases the amount of memory that each Apache process requires and therefore limits the number of simultaneous requests you can deal with. The final thing to check is that Apache isn’t configured to start more servers than you have memory for. This is usually done by setting the MaxClients directive. When that limit is reached, your site is going to stop responding to further requests. However, if all else goes well that threshold won’t be reached, and if it does it will at least stop the weight of the traffic taking the entire server down to a point where you can’t even log in to sort it out. Those are the main tidbits I’ve found useful for this site, although it’s worth repeating that entire books have been written on this subject alone. Caching Although the site is generated with PHP and MySQL, the majority of pages served don’t come from the database. The process of compiling a page on-the-fly involves quite a few trips to the database for content, templates, configuration settings and so on, and so can be slow and require a lot of CPU. Unless a new article or comment is published, the site doesn’t actually change between requests and so it makes sense to generate each page once, save it to a file and then just serve all following requests from that file. We use QuickCache (or rather a plugin based on it) for this. The plugin integrates with our publishing system (Textpattern) to make sure the cache is cleared when something on the site changes. A similar plugin called WP-Cache is available for WordPress, but of course this could be done any number of ways, and with any back-end technology. The important principle here is to reduce the time it takes to serve a page by compiling the page once and serving that cached result to subsequent visitors. Keep away from your database if you can. Outsource your feeds We get around 36,000 requests for our feed each day. That really only works out at about 7,000 subscribers averaging five-and-a-bit requests a day, but it’s still 36,000 requests we could easily do without. Each request uses resources and particularly during December, all those requests can add up. The simple solution here was to switch our feed over to using FeedBurner. We publish the address of the FeedBurner version of our feed here, so those 36,000 requests a day hit FeedBurner’s servers rather than ours. In addition, we get pretty graphs showing how the subscriber-base is building.
Off-load big files Larger files like images or downloads pose a problem not in bandwidth, but in the time it takes them to transfer. A typical page request is very quick, a few seconds at the most, resulting in the connection being freed up promptly. Anything that keeps a connection open for a long time is going to start killing performance very quickly. This year, we started serving most of the images for articles from a subdomain – media.24ways.org. Rather than pointing to our own server, this subdomain points to an Amazon S3 account where the files are held. It’s easy to pigeon-hole S3 as merely an online backup solution, and whilst not a fully fledged CDN, S3 works very nicely for serving larger media files. The roughly 20GB of files served this month have cost around $5 in Amazon S3 charges. That’s so affordable it may not be worth even taking the files back off S3 once December has passed. I found this article on Scalable Media Hosting with Amazon S3 to be really useful in getting started. I upload the files via a Firefox plugin (mentioned in the article) and then edit the ACL to allow public access to the files. The way S3 enables you to point DNS directly at it means that you’re not tied to always using the service, and that it can be transparent to your users. If your site uses photographs, consider uploading them to a service like Flickr and serving them directly from there. Many photo sharing sites are happy for you to link to images in this way, but do check the acceptable use policies in case you need to provide a credit or link back. Off-load small files You’ll have noticed the pattern by now – get rid of as much traffic as possible. When an article has a lot of comments and each of those comments has an avatar along with it, a great many requests are needed to fetch each of those images. In 2006 we started using Gravatar for avatars, but their servers were slow and were holding up page loads. To get around this we started caching the images on our server, but along with that came the burden of furnishing all the image requests. Earlier this year Gravatar changed hands and is now run by the same team behind WordPress.com. Those guys clearly know what they’re doing when it comes to high performance, so this year we went back to serving avatars directly from them. If your site uses avatars, it really makes sense to use a service like Gravatar where your users probably already have an account, and where the image requests are going to be dealt with for you. Know what you’re paying for The server account we use for 24 ways was opened in November 2005. When we first hit the front page of Digg in December of that year, we upgraded the server with a bit more memory, but other than that we were still running on that 2005 spec for two years. Of course, the world of technology has moved on in those years, prices have dropped and specs have improved. For the same amount we were paying for that 2005 spec server, we could have an account with twice the CPU, memory and disk space. So in November of this year I took out a new account and transferred the site from the old server to the new. In that single step we were prepared for dealing with twice the amount of traffic, and because of a special offer at the time I didn’t even have to pay the setup cost on the new server. So it really pays to know what you’re paying for and keep an eye out for ways you can make improvements without needing to spend more money. Further steps There’s nearly always more that can be done.
For example, there are some media files (particularly for older articles) that are not on S3. We also serve our CSS directly and it’s not minified or compressed. But by tackling the big problems first we’ve managed to reduce load on the server and at the same time make sure that the load being placed on the server can be dealt with in the most frugal way. Over the last 24 days we’ve served up articles to more than 350,000 visitors without breaking a sweat. On a busy day, that’s been nearly 20,000 visitors in just 24 hours. While in the grand scheme of things that’s not a huge amount of traffic, it can be a lot if you’re not prepared for it. However, with a little planning for the peaks you can help ensure that when the traffic arrives you’re ready to capitalise on it. Of course, people only visit 24 ways for the wealth of knowledge and experience that’s tied up in the articles here. Therefore I’d like to take the opportunity to thank all our authors this year who have given their time as a gift to the community, and to wish you all a very happy Christmas. 2007 Drew McLellan drewmclellan 2007-12-24T00:00:00+00:00 https://24ways.org/2007/performance-on-a-shoe-string/ ux
232 Optimize Your Web Design Workflow I’m not sure about you, but I still favour using Photoshop to create my designs for the web. I agree that this application, even with its never-ending feature set, is not the perfect environment to design websites in. The ideal application doesn’t exist yet, however, so until it does it’s maybe not such a bad idea to investigate ways to optimize our workflow. Why use Photoshop? It will probably not come as a surprise if I say that Photoshop and Illustrator are the applications that I know best and feel most comfortable and creative in. Some people prefer Fireworks for web design. Even though I understand people’s motivations, I still prefer Photoshop personally. On the occasions that I gave Fireworks a try, I ended up just using the application to export my images as slices, or to prepare a dummy for the client. For some reason, I’ve never been able to find my way in that app. There were always certain things missing that could only be done in either Photoshop or Illustrator, which bothered me. Why not start in the browser? These days, with CSS3 styling emerging, there are people who find it more efficient to design in the browser. I agree that at a certain point, once the basic design is all set and defined, you can jump right into the code and go from there. But the actual creative part, at least for me, needs to be done in an application such as Photoshop. As a designer I need to be able to create and experiment with shapes on the fly, draw things, move them around, change colours, gradients, effects, and so on. I can’t see me doing this with code. I’m sure if I switch to markup too quickly, I might end up with a rather boxy and less interesting design. Once I start playing with markup, I leave my typical ‘design zone’. My brain starts thinking differently – more rational and practical, if you know what I mean; I start to structure and analyse how to mark up my design in the most efficient semantic way. When I design, I tend to let that go for a bit. I think more freely and not so much about the limitations, as it might hinder my creativity. Now that you know my motivations to stick with Photoshop for the time being, let’s see how we can optimize this beast. Optimize your Photoshop workspace In Photoshop CS5 you have a few default workspace options to choose from which can be found at the top right in the Application Bar (Window > Application Bar). You can set up your panels and palettes the way you want, starting from the ‘Design’ workspace option, and save this workspace for future web work. Here is how I have set up things for when I work on a website design: I have the layers palette open, and I keep the other palettes collapsed. Sometimes, when space permits, I open them all. For designers who work both on print and web, I think it’s worthwhile to save a workspace for both, or for when you’re doing photo retouching. Set up a grid When you work a lot with Shape Layers like I do, it’s really helpful to enable the Grid (View > Show > Grid) in combination with Snap to Grid (View > Snap To > Grid). This way, your vector-based work will be pixel-sharp, as it will always snap to the grid, and so you don’t end up with blurry borders. To set up your preferred grid, go to Preferences > Guides, Grids and Slices. A good setting is to use ‘Gridline Every 10 pixels’ and ‘Subdivision 10’. You can switch it on and off at any time using the shortcut Cmd/Ctrl + ’. It might also help to turn on Smart Guides (View > Show > Smart Guides). 
Another important tip for making sure your Shape Layer boxes and other shapes are perfectly aligned to the pixel grid when you draw them is to enable Snap to Pixels. This option can be enabled in the Application bar in the Geometry options dropdown menu when you select one of the shape tools from the toolbox. Use Shape Layers To keep your design as flexible as possible, it’s a good thing to use Shape Layers wherever you can as they are scalable. I use them when I design for the iPhone. All my icons, buttons, backgrounds, illustrative graphics – they are all either Smart Objects placed from Illustrator, or Shape Layers. This way, the design is scalable for the retina display. Use Smart Objects Among the things I like a lot in Photoshop are Smart Objects. Smart Objects preserve an image’s source content with all its original characteristics, enabling you to perform non-destructive editing to the layer. For me, this is the ideal way of making my design flexible. For example, a lot of elements are created in Illustrator and are purely vector-based. Placing these elements in Photoshop as Smart Objects (via copy and paste, or dragging from Illustrator into Photoshop) will keep them vector-based and scalable at all times without loss of quality. Another way you could use Smart Objects is whenever you have repeating elements; for example, if you have a stream or list of repeating items. You could, for instance, create one, two or three different items (for the sake of randomness), make each one a Smart Object, and repeat them to create the list. Then, when you have to update, you need only change the Smart Object, and the update will be automatically applied in all its linked instances. Turning photos into Smart Objects before you resize them is also worth considering – you never know when you’ll need that same photo just a bit bigger. It keeps things more flexible, as you leave room to resize the image at a later stage. I use this in combination with the Smart Filters a lot, as it gives me such great flexibility. I usually use Smart Objects as well for the main sections of a web page, which are repeated across different pages of a site. So, for elements such as the header, footer and sidebar, it can be handy for bigger projects that are constantly evolving, where you have to create a lot of different pages in Photoshop. You could save a template page that has the main sections set up as Smart Objects, always in their latest version. Each time you need to create a new page, you can start from that template file. If you need to update an existing page because the footer (or sidebar, or header) has been updated, you can drag the updated Smart Object into this page. I do wish, though, that Photoshop made it possible to have Smart Objects live as separate files, which are then linked to my different pages. Then, whenever I update the Smart Object, the pages are automatically updated next time I open the file. This is how linked files work in InDesign and Illustrator when you place an external image. Use Layer Comps In some situations, using Layer Comps can come in handy. I try to use them when the design consists of different states; for example, if there are hidden and shown states of certain content, such as when content is shown after clicking a certain button. It can be useful to create a Layer Comp for each state. So, when you switch between the two Layer Comps, you’re switching between the two states. It’s OK to move or hide content in each of these states, as well as apply different layer styles.
I find this particularly useful when I need to save separate JPEG versions of each state to show to the client, instead of going over all the eye icons in the layers palette to turn the layers’ visibility on or off. Create a set of custom colour swatches I tend to use a distinct colour Swatches palette for each project I work on, by saving a separate Swatches palette in the project’s folder (as an .ase file). You can do this through the palette’s dropdown menu, choosing Save Swatches for Exchange. Selecting this option gives you the flexibility to load this palette in other Adobe applications like Illustrator, InDesign or Fireworks. This way, you have the colours of any particular project at hand. I name each colour, using the hexadecimal values. Loading, saving or changing the view of the Swatches palette can be done via the palette’s dropdown menu. My preferred view is ‘Small List’ so I can see the hexadecimal values or other info I have added in the description. I do wish Photoshop had the option of loading several different Swatches palettes, so I could have two or more of them open at the same time, but each as a separate palette. This would be handy whenever I switch to another project, as I’m usually working on more than one project in a day. At the moment, you can only add a set of colours to the palette that is already open, which is frustrating and inefficient if you need to update the palette of a project separately. Create a set of custom Styles Just like saving a Swatches palette, I also always save the styles I apply in the Styles palette as a separate Styles file in the project’s folder when I work on a website design or design for iPhone/iPad. During the design process, I can save it each time styles are added. Again, though, it would be great if we could have different Styles palettes open at the same time. Use a scratch file What I also find particularly timesaving, when working on a large project, is using some kind of scratch file. By that, I mean a file that has elements in place that you reuse a lot in the general design. Think of buttons, icons and so on, that you need in every page or screen design. This is great for both web design work and iPad/iPhone work. Use the slice tool This might not be something you think of at first, because you probably associate this way of working with ‘old-school’ table-based techniques. Still, you can apply your slice any way you want, keeping your way of working in mind. Just think about it for a second. If you use the slice tool, and you give each slice its proper filename, you don’t have to worry about it when you need to do updates on the slice or image. Photoshop will remember what the image of that slice is called and which ‘Save for Web’ export settings you’ve used for it. You can also export multiple slices all at once, or export only the ones you need using ‘Save selected slices’. I hope this list of optimization tips was useful, and that they will help you improve and enjoy your time in Photoshop. That is, until the ultimate web design application makes its appearance. Somebody is building this as we speak, right? 2010 Veerle Pieters veerlepieters 2010-12-10T00:00:00+00:00 https://24ways.org/2010/optimize-your-web-design-workflow/ process
281 Nine Things I’ve Learned I’ve been a professional graphic designer for fourteen years and for just under four of those a professional web designer. Like most designers I’ve learned a lot in my time, both from a design point of view and in business as a freelance designer. A few of the things I’ve learned stick out in my mind, so I thought I’d share them with you. They’re pretty random and in no particular order. 1. Becoming the designer you want to be When I started out as a young graphic designer, I wanted to design posters and record sleeves, pretty much like every other young graphic designer. The problem is that the reality of the world means that when you get your first job you’re designing the back of a paracetamol packet or something equally weird. I recently saw a tweet that went something like this: “You’ll never become the designer you always dreamt of being by doing the work you never wanted to do”. This is so true; to become the designer you want to be, you need to be designing the things you’re passionate about designing. This probably means working in the evenings and weekends for little or no money, but it’s time well spent. Doing this will build up your portfolio with the work that really shows what you can do! Soon, someone will ask you to design something based on having seen this work. From this point, you’re carving your own path in the direction of becoming the designer you always wanted to be. 2. Compete on your own terms As well as all being friends, we are also competitors. In order to win new work we need a selling point, preferably a unique selling point. Web design is a combination of design disciplines – user experience design, user interface design, visual design, development, and so on. Some companies will sell themselves as UX specialists, which is fine, but everyone who designs a website from scratch does some sort of UX, so it’s not really a unique selling point. Of course, some people do it better than others. One area of web design that clients have a strong opinion on, and will judge you by, is visual design. It’s an area in which it’s definitely possible to have a unique selling point. Designing the visual aesthetic for a website is a combination of logical decision making and a certain amount of personal style. If you can create a unique visual style to your work, it can become a selling point that’s unique to you. 3. How much to charge and staying motivated When you’re a freelance designer one of the hardest things to do is put a price on your work and skills. Finding the right amount to charge is a fine balance between supplying value to your customer and also charging enough to stay motivated to do a great job. It’s always tempting to offer a low price to win work, but it’s often not the best approach: not just for yourself but for the client as well. A client once asked me if I could reduce my fee by £1,000 and still be motivated enough to do a good job. In this case the answer was yes, but it was the question that resonated with me. I realized I could use this as a gauge to help me price projects. Before I send out a quote I now always ask myself the question “Is the amount I’ve quoted enough to make me feel motivated to do my best on this project?” I never send out a quote unless the answer is yes. In my mind there’s no point in doing any project half-heartedly, as every project is an opportunity to build your reputation and expand your portfolio to show potential clients what you can do.
Offering a client a good price but not being prepared to put everything you have into it isn’t value for money. 4. Supplying the right design When I started out as a graphic designer it seemed to be the done thing to supply clients with a ton of options for their logo or brochure designs. In a talk given by Dan Rubin, he mentioned that this was a legacy of agencies competing with each other in a bid to create the illusion of offering more value for money. Over the years, I’ve realized that offering more than one solution makes no sense. The reason a client comes to you as a designer is because you’re the person that can get it right. If I were to supply three options, I’d be knowingly offering my client at least two options that I didn’t think worked. To this day I still get asked how many homepage design options I’ll supply for the quoted amount. The answer is one. Of course, I’m more than happy to iterate upon the design to fine-tune it and, on the odd occasion, I do revisit a design concept if I just didn’t nail the design first time around. Your time is much better spent refining the right design option than rushing out three substandard designs in the same amount of time. 5. Colour is key There are many contributing factors that go into making a good visual design, but one of the simplest ways to do this is through the use of colour. The colour palette used in a design can have such a profound effect on a visual design that it almost feels like you’re cheating. It’s easy to add more and more subtle shades of colour to add a sense of sophistication and complexity to a design, but it dilutes the overall visual impact. When I design, I almost have a rule that only allows me to use a very limited colour palette. I don’t always stick to it, but it’s always in mind and something I’m constantly reviewing through my design process. 6. Creative thinking is central to good or boundary-pushing web design When we think of creativity in web design we often link this to the visual design, as there is an obvious opportunity to be creative in this area if the brief allows it. Something that I’ve learnt in my time as a web designer is that there’s a massive need for creative thinking in the more technical aspects of web design. The tools we use for building websites are there to be manipulated and used in creative ways to design exciting and engaging user experiences. Great developers are constantly using their creativity to push the boundaries of what can be done with CSS, jQuery and JavaScript. Being creative and creative thinking are things we should embrace as an industry, and they are qualities that can be found in anyone, whether they be a visual designer or a Rails developer. 7. Creative block: don’t be afraid to get things wrong Creative block can be a killer when designing. It’s often applied to visual design, which is more subjective. I suffer from creative block on a regular basis. It’s hugely frustrating and can screw up your schedule. Having thought about what creative block actually is, I’ve come to the conclusion that it’s actually more of a lack of direction than a lack of ideas. You have ideas and solutions in mind but don’t feel committed to any of them. You’re scared that whatever direction you take, it’ll turn out to be wrong. I’ve found that the best remedy for this is to work through this barrier. It’s a bit like designing with a blindfold on – you don’t really know where you’re going. 
If you stick to your guns and keep pressing forward I find that, nine times out of ten, this process leads to a solution. As the page begins to fill, the direction you’re looking for slowly begins to take shape. 8. You get better at designing by designing I often get emails asking me what books someone can read to help them become a better designer. There are a lot of good books on subjects like HTML5, CSS, responsive web design and the like, that will really help improve anyone’s web design skills. But, when it comes to visual design, the best way to get better is to design as much as possible. You can’t follow instructions for these things because design isn’t following instructions. A large part of web design is definitely applying a set of widely held conventions, but there’s another part to it that is invention and the only way to get better at this is to do it as much as possible. 9. Self-belief is overrated Throughout our lives we’re told to have self-belief. Self-belief and confidence in what we do, whatever that may be. The problem is that some people find it easier than others to believe in themselves. I’ve spent years trying to convince myself to believe in what I do but have always found it difficult to have complete confidence in my design skills. Self-doubt always creeps in. I’ve realized that it’s ok to doubt myself and I think it might even be a good thing! I’ve realized that it’s my self-doubt that propels me forward and makes me work harder to achieve the best results. The reason I’m sharing this is because I know I’m not the only designer that feels this way. You can spend a lot of time fighting self-doubt only to discover that it’s your body’s natural mechanism to help you do the best job possible. 2011 Mike Kus mikekus 2011-12-11T00:00:00+00:00 https://24ways.org/2011/nine-things-ive-learned/ business
294 New Tricks for an Old Dog Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy. I’ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn’t know or exposes an area of uncertainty that you can go work on. In this article, I hope to share some experiences and techniques that will prove useful in your own situation, and that might even let you impress your friends in some new and exciting ways! How do you do, fellow kids? To start with I’d like to introduce you to our JavaScript framework, Flight. Right now it’s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React. Over time, as we used Flight for more complex interfaces, we found it wasn’t scaling with us. Composing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn’t come built into Flight, and it made reusing components a real challenge. There was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn’t predict how a component would be built until you opened it. Making matters worse, Flight relied on events to move data around the application. Unfortunately, events aren’t good for giving structure to complex logic. They jump around in a way that’s hard to understand and debug, and force you to search your code for a specific string — the event name — to figure out what’s going on. To find fixes for these problems, we looked around at other frameworks. We like React for its simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone. But when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you’ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort! Instead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we were already using. Boiled down, what we liked seemed quite simple: Component nesting & composition Easy, predictable state management Normal functions for data manipulation Making these ideas applicable to Flight took some time, but we’re in a much better place now. Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app. While the specifics of our situation and Flight aren’t really important, this experience taught me something: Distill good tech into great ideas. You can apply great ideas anywhere. You don’t have to use the cool kids’ latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already? 
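To make that concrete, here is a minimal sketch — invented for illustration, not actual TweetDeck code — of the kind of nugget we borrowed from React and Elm: a state change expressed as a plain function that returns new state instead of mutating the old.

// A sketch only: addColumn and the state shape are hypothetical.
function addColumn(state, column) {
  // Return a fresh object rather than mutating in place, so changes
  // stay predictable and trivial to assert on in tests.
  return Object.assign({}, state, {
    columns: state.columns.concat([column])
  });
}

var state = { columns: [] };
state = addColumn(state, { id: 'home', type: 'timeline' });
console.log(state.columns.length); // 1

A function like this can be dropped into any codebase — Flight-based or otherwise — without adopting a whole new framework, which is exactly the point.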
Times, they are a changin’ Apart from stealing ideas from the new and shiny, how can we make the most of improved tooling and techniques? Times change and so should the way we write code. Going back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out on new language features, and without a more advanced build tool like Webpack, every module’s source was encased in AMD boilerplate. In fact, we found ourselves with a mix of both AMD syntaxes: define(["lodash"], function (_) { // . . . }); define(function (require) { var _ = require("lodash"); // . . . }); This just wouldn’t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax: import _ from "lodash"; These days we’re using Babel, Webpack, ES2015 modules and many new language features that make development just… better. But how did we get there? To explain, I want to introduce you to codemods and jscodeshift. A codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL("...") to new URL("..."). jscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file’s syntax tree — a data-structure representation of the code — finding and modifying in place as it goes. Here’s an example that renames all instances of the variable foo to bar: module.exports = function (fileInfo, api) { return api .jscodeshift(fileInfo.source) .findVariableDeclarators('foo') .renameTo('bar') .toSource(); }; It’s a seriously powerful tool, and we’ve used it to write a series of codemods that: rename modules, unify our use of AMD to a single syntax, transition from one testing framework to another, and switch from AMD to CommonJS. These changes can be pretty huge and far-reaching. Here’s an example commit from when we switched to CommonJS: commit 8f75de8fd4c702115c7bf58febba1afa96ae52fc Date: Tue Jul 12 2016 Run AMD -> CommonJS codemod 418 files changed, 47550 insertions(+), 48468 deletions(-) Yep, that’s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone! From this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes: Find all the existing patterns Choose the two most similar Unify with a codemod Repeat. For example: For module loading, we had two competing AMD patterns plus some use of CommonJS The two AMD syntaxes were the most similar We used a codemod to unify the AMD patterns Later we returned to AMD to convert it to CommonJS It’s worked for us, and if you’d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer. Welcome aboard! As TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad of microservices that manage our data, and the layers of authentication, security and business logic around them, make for an overwhelming amount of information to hand to a newbie. Inspired by Amy’s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design the onboarding that each of our new hires goes through, to make the most of their first few weeks. Joining a new company, team, or both, is stressful and uncomfortable. 
Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding! And as you build up an onboarding process, you’ll create things that are useful for more than just new hires; it’ll force you to write documentation, for example, in a way that’s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help. This is something that’s taken for granted in open source, but somehow I think we forget about it in big companies. After all, better documentation is just a good thing. You will forget things from time to time, and you’d be surprised how often the “beginner” docs help! For TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts: What are our dependencies? Where are the potential points of failure? Where does authentication live? Storage? Caching? Who owns “X”? Of course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track. To address this, we’ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review. In my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team’s work. But, if you’re not doing it, here’s my suggestion for getting started: Every pull request gets a +1 from someone else. That’s all — it’s very light-weight and easy. Just ask someone to have a quick look over your code before it goes into master. At Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things: Don’t review for more than an hour [1] Keep reviews smaller than ~400 lines [2] Code review your own code first [2] After an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone’s put code up for a review, that review is blocking them doing other work. It’s your job to unblock them. On TweetDeck, we actually try to keep reviews under 250 lines. It doesn’t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration. But the most important thing I’ve learned personally is that reviewing my own code is the best way to spot issues. I try to approach my own reviews the way I approach my team’s: with fresh, critical eyes, after a break, using a dedicated code review tool. It’s amazing what you can spot when you put a new interface around code you’ve been staring at for hours! And yes, this list features science. The data backs up these conclusions, and if you’d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It’s ace. 
For more dedicated information sharing, we’ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team-member shares or teaches something to everyone else, and next time it’s someone else’s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results. If you’d like to run a seminar, one thing you could try to get started: run a “point at the thing you least understand in our architecture” session — thanks to James for this idea. And guess what… your onboarding architecture diagrams will help (and benefit from) this! More, please! There are a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations. And finally, thanks to Amy for proofreading this and to Passy for feedback on the original talk. [1] Dunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Proceedings of the 22nd ICSE: 467-476. [2] Cohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Beverly, MA: SmartBear Software. 2016 Tom Ashworth tomashworth 2016-12-18T00:00:00+00:00 https://24ways.org/2016/new-tricks-for-an-old-dog/ code
335 Naughty or Nice? CSS Background Images Web Standards based development involves many things – using semantically sound HTML to provide structure to our documents or web applications, using CSS for presentation and layout, using JavaScript responsibly, and of course, ensuring that all that we do is accessible and interoperable to as many people and user agents as we can. This we understand to be good. And it is good. Except when we don’t clearly think through the full implications of using those techniques. Which often happens when time is short and we need to get things done. Here are some naughty examples of CSS background images with their nicer, more accessible counterparts. Transaction related messages I’m as guilty of this as others (or, perhaps, I’m the only one that has done this, in which case this can serve as my holiday season confessional) We use lovely little icons to show status messages for a transaction to indicate if the action was successful, or was there a warning or error? For example: “Your postal/zip code was not in the correct format.” Notice that we place a nice little icon there, and use background colours and borders to convey a specific message: there was a problem that needs to be fixed. Notice that all of this visual information is now contained in the CSS rules for that div: <div class="error"> <p>Your postal/zip code was not in the correct format.</p> </div> div.error { background: #ffcccc url(../images/error_small.png) no-repeat 5px 4px; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; } Using this approach also makes it very easy to create a div.success and div.warning CSS rules meaning we have less to change in our HTML. Nice, right? No. Naughty. Visual design communicates The CSS is being used to convey very specific information. The choice of icon, the choice of background colour and borders tell us visually that there is something wrong. With the icon as a background image – there is no way to specify any alt text for the icon, and significant meaning is lost. A screen reader user, for example, misses the fact that it is an “error.” The solution? Ask yourself: what is the bare minimum needed to indicate there was an error? Currently in the absence of CSS there will be no icon – which (I’m hoping you agree) is critical to communicating there was an error. The icon should be considered content and not simply presentational. The borders and background colour are certainly much less critical – they belong in the CSS. Lets change the code to place the image directly in the HTML and using appropriate alt text to better communicate the meaning of the icon to all users: <div class="bettererror"> <img src="images/error_small.png" alt="Error" /> <p>Your postal/zip code was not in the correct format.</p> </div> div.bettererror { background-color: #ffcccc; color: #900; border-top: 1px solid #c00; border-bottom: 1px solid #c00; padding: 0.25em 0.5em 0.25em 2.5em; font-weight: bold; position: relative; min-height: 1.25em; } div.bettererror img { display: block; position: absolute; left: 0.25em; top: 0.25em; padding: 0; margin: 0; } div.bettererror p { position: absolute; left: 2.5em; padding: 0; margin: 0; } Compare these two examples of transactional messages Status of a Record This example is pretty straightforward. Consider the following: a real estate listing on a web site. There are three “states” for a listing: new, normal, and sold. 
Here’s how they look: Example of a New Listing Example of A Sold Listing If we (forgive the pun) blindly apply the “use a CSS background image” technique we clearly run into problems with the new and sold images – they actually contain content with no way to specify an alternative when placed in the CSS. In the case of the “new” image, we can use the same strategy as we used in the first example (the transaction result). The “new” image should be considered content and is placed in the HTML as part of the <h2>...</h2> that identifies the listing. However, when considering the “sold” listing, there are fewer changes to be made to keep the same look by leaving the “SOLD” image as a background image and providing the equivalent information elsewhere in the listing – namely, right in the heading. For those that can’t see the background image, the status is communicated clearly and right away. A screen reader user that is navigating by heading or viewing a listing will know right away that a particular property is sold. Of note here is that in both cases (new and sold) placing the status near the beginning of the record helps with a zoom layout as well. Better Example of A Sold Listing Summary Remember: in the holiday season, it’s what you give that counts! Using CSS background images is easy and saves time for you but think of the children. And everyone else for that matter… CSS background images should only be used for presentational images, not for those that contain content (unless that content is already represented and readily available elsewhere). 2005 Derek Featherstone derekfeatherstone 2005-12-20T00:00:00+00:00 https://24ways.org/2005/naughty-or-nice-css-background-images/ code
36 Naming Things There are only two hard things in computer science: cache invalidation and naming things. Phil Karlton Being a professional web developer means taking responsibility for the code you write and ensuring it is comprehensible to others. Having a documented code style is one means of achieving this, although the size and type of project you’re working on will dictate the conventions used and how rigorously they are enforced. Working in-house may mean working with multiple developers, perhaps in distributed teams, who are all committing changes – possibly to a significant codebase – at the same time. Left unchecked, this codebase can become unwieldy. Coding conventions ensure everyone can contribute, and help build a product that works as a coherent whole. Even on smaller projects, perhaps working within an agency or by yourself, at some point the resulting product will need to be handed over to a third party. It’s sensible, therefore, to ensure that your code can be understood by those who’ll eventually take ownership of it. Put simply, code is read more often than it is written or changed. A consistent and predictable naming scheme can make code easier for other developers to understand, improve and maintain, presumably leaving them free to worry about cache invalidation. Let’s talk about semantics Names not only allow us to identify objects, but they can also help us describe the objects being identified. Semantics (the meaning or interpretation of words) is the cornerstone of standards-based web development. Using appropriate HTML elements allows us to create documents and applications that have implicit structural meaning. Thanks to HTML5, the vocabulary we can choose from has grown even larger. HTML elements provide one level of meaning: a widely accepted description of a document’s underlying structure. It’s only with the mutual agreement of browser vendors and developers that <p> indicates a paragraph. Yet (with the exception of widely accepted microdata and microformat schemas) only HTML elements convey any meaning that can be parsed consistently by user agents. While using semantic values for class names is a noble endeavour, they provide no additional information to the visitor of a website; take them away and a document will have exactly the same semantic value. I didn’t always think this was the case, but the real world has a habit of changing your opinion. Much of my thinking around semantics has been informed by the writing of my peers. In “About HTML semantics and front-end architecture”, Nicholas Gallagher wrote: The important thing for class name semantics in non-trivial applications is that they be driven by pragmatism and best serve their primary purpose – providing meaningful, flexible, and reusable presentational/behavioural hooks for developers to use. These thoughts are echoed by Harry Roberts in his CSS Guidelines: The debate surrounding semantics has raged for years, but it is important that we adopt a more pragmatic, sensible approach to naming things in order to work more efficiently and effectively. Instead of focussing on ‘semantics’, look more closely at sensibility and longevity – choose names based on ease of maintenance, not for their perceived meaning. Naming methodologies Front-end development has undergone a revolution in recent years. As the projects we’ve worked on have grown larger and more important, our development practices have matured. 
The pros and cons of object-orientated approaches to CSS can be endlessly debated, yet their introduction has highlighted the usefulness of having documented naming schemes. Jonathan Snook’s SMACSS (Scalable and Modular Architecture for CSS) collects style rules into five categories: base, layout, module, state and theme. This grouping makes it clear what each rule does, and is aided by a naming convention: By separating rules into the five categories, naming convention is beneficial for immediately understanding which category a particular style belongs to and its role within the overall scope of the page. On large projects, it is more likely to have styles broken up across multiple files. In these cases, naming convention also makes it easier to find which file a style belongs to. I like to use a prefix to differentiate between layout, state and module rules. For layout, I use l- but layout- would work just as well. Using prefixes like grid- also provide enough clarity to separate layout styles from other styles. For state rules, I like is- as in is-hidden or is-collapsed. This helps describe things in a very readable way. SMACSS is more a set of suggestions than a rigid framework, so its ideas can be incorporated into your own practice. Nicholas Gallagher’s SUIT CSS project is far more strict in its naming conventions: SUIT CSS relies on structured class names and meaningful hyphens (i.e., not using hyphens merely to separate words). This helps to work around the current limits of applying CSS to the DOM (i.e., the lack of style encapsulation), and to better communicate the relationships between classes. Over the last year, I’ve favoured a BEM-inspired approach to CSS. BEM stands for block, element, modifier, which describes the three types of rule that contribute to the style of a single component. This means that, given the following markup: <ul class="sleigh"> <li class="sleigh__reindeer sleigh__reindeer--famous">Rudolph</li> <li class="sleigh__reindeer">Dasher</li> <li class="sleigh__reindeer">Dancer</li> <li class="sleigh__reindeer">Prancer</li> <li class="sleigh__reindeer">Vixen</li> <li class="sleigh__reindeer">Comet</li> <li class="sleigh__reindeer">Cupid</li> <li class="sleigh__reindeer">Dunder</li> <li class="sleigh__reindeer">Blixem</li> </ul> I know that: .sleigh is a containing block or component. .sleigh__reindeer is used only as a descendent element of .sleigh. .sleigh__reindeer--famous is used only as a modifier of .sleigh__reindeer. With this naming scheme in place, I know which styles relate to a particular component, and which are shared. Beyond reducing specificity-related head-scratching, this approach has given me a framework within which I can consistently label items, and has sped up my workflow considerably. Each of these methodologies shows that any robust CSS naming convention will have clear rules around case (lowercase, camelCase, PascalCase) and the use of special (allowed) characters like hyphens and underscores. What makes for a good name? Regardless of higher-level conventions, there’s no getting away from the fact that, at some point, we’re still going to have to name things. Recognising that classes should be named with other developers in mind, what makes for a good name? Understandable The most important aspect is for a name to be understandable. Words used in your project may come from a variety of sources: some may be widely understood, and others only be recognised by people working within a particular environment. 
Culture Most words you’ll choose will have common currency outside the world of web development, although they may have a particular interpretation among developers (think menu, list, input). However, words may have a narrower cultural significance; for example, in Germany and other German-speaking countries, impressum is the term used for legally mandated statements of ownership. Industry Industries often use specific terms to describe common business practices and concepts. Publishing has a number of these (headline, standfirst, masthead, colophon…) all have well understood meanings – and not all of them are relevant to online usage. Organisation Companies may have internal names (or nicknames) for their products and services. The Guardian is rife with such names: bisons (and buffalos), pixies (and super-pixies), bentos (and mini-bentos)… all of which mean something very different outside the organisation. Although such names can be useful inside smaller teams, in larger organisations they can become a barrier to entry, a sort of secret code used among employees who have been around long enough to know what they mean. Product Your team will undoubtedly have created names for specific features or interface components used in your product. For example, at Clearleft we coined the term gravigation for a navigation bar that was pinned to the bottom of the viewport. Elements of a visual design language may have names, too. Transport for London’s bar and circle logo is known internally as the roundel, while Nike’s logo is called the swoosh. Branding agencies often christen colours within a brand palette, too, either to evoke aspects of the identity or to indicate intended usage. Once you recognise the origin of the words you use, you’ll be better able to judge their appropriateness. Using Latin words for class names may satisfy a need to use semantic-sounding terms but, unless you work in a company whose employees have a basic grasp of Latin, a degree of translation will be required. Military ranks might be a clever way of declaring sizes without implying actual values, but I’d venture most people outside the armed forces don’t know how they’re ordered. Obvious Quite often, the first name that comes into your head will be the best option. Names that obliquely reference the function of a class (e.g. receptacle instead of container, kevlar instead of no-bullets) only serve to add an additional layer of abstraction. Don’t overthink it! One way of knowing if the names you use are well understood is to look at what similar concepts are called in existing vocabularies. schema.org, Dublin Core and the BBC’s ontologies are all useful sources for object names. Functional While we’ve learned to avoid using presentational classes, there remains a tension between naming things based on their content, and naming them for their intended presentation or behaviour (which may change at different breakpoints). Rather than think about a component’s appearance or behaviour, instead look to its function, its purpose. To clarify, ask what a component’s function is, and not how the component functions. For example, the Guardian’s internal content system uses the following names for different types of image placement: supporting, showcase and thumbnail, with inline being the default. These options make no promise of the resulting position on a webpage (or smartphone app, or television screen…), but do suggest intended use, and therefore imply the likely presentation. 
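As a rough sketch (these class names are invented for illustration, not the Guardian’s actual code), functional names like those might appear in markup as:

<figure class="media media--showcase">
  <img src="sleigh.jpg" alt="A sleigh at dusk">
</figure>
<figure class="media media--thumbnail">
  <img src="reindeer.jpg" alt="A reindeer in the snow">
</figure>

Nothing in these names promises a pixel position; they describe intended use, and leave the presentation to each breakpoint or device.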
Consistent Being consistent in your approach to names will allow for easier naming of successive components, and extending the vocabulary when necessary. For example, a predictably named hierarchy might use names like primary and secondary. Should another level need to be added, tertiary is clearly preferred over third. Appropriate Your project will feature a mix of style rules. Some will perform utility functions (clearing floats, removing bullets from a list, resetting margins), while others will perform specific functions used only once or twice in a project. Names should reflect this. For commonly used classes, be generic; for unique components be more specific. It’s also worth remembering that you can use multiple classes on an element, so combining both generic and specific can give you a powerful modular design system: Generic: list Specific: naughty-children Combined: naughty-children list If following the BEM methodology, you might use the following classes: Generic: list Specific: list--nice-children Combined: list list--nice-children Extensible Good naming schemes can be extended. One way of achieving this is to use namespaces, which are basically a way of grouping related names under a higher-level term. Microformats are a good example of a well-designed naming scheme, with many of their vocabularies taking property names from existing and related specifications (e.g. hCard is a 1:1 representation of vCard). Microformats 2 goes one step further by grouping properties under several namespaces: h-* for root class names (e.g. h-card) p-* for simple (text) properties (e.g. p-name) u-* for URL properties (e.g. u-photo) dt-* for date/time properties (e.g. dt-bday) e-* for embedded markup properties (e.g. e-note) The inclusion of namespaces is a massive improvement over the earlier specification, but the downside is that microformats now occupy five separate namespaces. This might be problematic if you are using u-* for your utility classes. While nothing will break, your naming system won’t be as robust, so plan accordingly. (Note: Microformats perform a very specific function, separate from any presentational concerns. It’s therefore considered best practice to not use microformat classes as styling hooks, but instead use additional classes that relate to the function of the component and adhere to your own naming conventions.) Short Names should be as long as required, but no longer. When looking for words to describe a particular function, I try to look for single words where possible. Avoid abbreviations unless they are understood within the contexts described above. rrp is fine if labelling a recommended retail price in an online shop, but not very helpful if used to mean ragged-right paragraph, for example. Fun! Finally, names can be an opportunity to have some fun! Names can give character to a project, be it by providing an outlet for in-jokes or adding little easter eggs for those inclined to look. The copyright statement on Apple’s website has long been named sosumi, a word that has a nice little history inside Apple. Until recently, the hamburger menu icon on the Guardian website was labelled honest-burger, after the developer’s favourite burger restaurant. A few thoughts on preprocessors CSS preprocessors have solved a lot of problems, but they have an unfortunate downside: they require you to name yet more things! Whereas we needed to worry only about style rules, now we need names for variables, mixins, functions… oh my! 
A second article could be written about naming these, so for now I’ll offer just a few thoughts. The first is to note that preprocessors make it easier to change things, as they allow for DRYer code. So while the names of variables are important (and the advice in this article still very much applies), you can afford to relax a little. Looking to name colour variables? If possible, find out if colours have been assigned names in a brand palette. If not, use obvious names (based on appearance or function, depending on your preference) and adapt as the palette grows. If it becomes difficult to name colours that are too similar, I’d venture that the problem lies with the design rather than the naming scheme. The same is true for responsive breakpoints. Preprocessors allow you to move awkward naming conventions out of the markup and into the CSS. Although terms like mobile, tablet and desktop are not desirable given the need to think about device-agnostic design, if these terms are widely understood within a product team and among stakeholders, using them will ensure everyone is using the same language (they can always be changed later). It still feels like we’re at the very beginning of understanding how preprocessors fit into a development workflow, if at all! I suspect over the next few years, best practices will emerge for all of these considerations. In the meantime, use your brain! Even with sensible rules and conventions in place, naming things can remain difficult, but hopefully I’ve made this exercise a little less painful. Christmas is a time of giving, so to the developer reading your code in a year’s time, why not make your gift one of clearer class names. 2014 Paul Lloyd paulrobertlloyd 2014-12-21T00:00:00+00:00 https://24ways.org/2014/naming-things/ code
164 My Other Christmas Present Is a Definition List A note from the editors: readers should note that the HTML5 redefinition of definition lists has come to pass and is now à la mode. Last year, I looked at how the markup for tag clouds was generally terrible. I thought this year I would look not at a method of marking up a common module, but instead just at a simple part of HTML and how it generally gets abused. No, not tables. Definition lists. Ah, definition lists. Often used but rarely understood. Examining the definition of definitions To start with, let’s see what the HTML spec has to say about them. Definition lists vary only slightly from other types of lists in that list items consist of two parts: a term and a description. The canonical example of a definition list is a dictionary. Words can have multiple descriptions (even the word definition has at least five). Also, many terms can share a single definition (for example, the word colour can also be spelt color, but they have the same definition). Excellent, we can all grasp that. But it very quickly starts to fall apart. Even in the HTML specification the definition list is mis-used. Another application of DL, for example, is for marking up dialogues, with each DT naming a speaker, and each DD containing his or her words. Wrong. Completely and utterly wrong. This is the biggest flaw in the HTML spec, along with dropping support for the start attribute on ordered lists. “Why?”, you may ask. Let me give you an example from Romeo and Juliet, act 2, scene 2. <dt>Juliet</dt> <dd>Romeo!</dd> <dt>Romeo</dt> <dd>My niesse?</dd> <dt>Juliet</dt> <dd>At what o'clock tomorrow shall I send to thee?</dd> <dt>Romeo</dt> <dd>At the hour of nine.</dd> Now, the problem here is that a given definition can have multiple descriptions (the DD). Really the dialog “descriptions” should be rolled up under the terms, like so: <dt>Juliet</dt> <dd>Romeo!</dd> <dd>At what o'clock tomorrow shall I send to thee?</dd> <dt>Romeo</dt> <dd>My niesse?</dd> <dd>At the hour of nine.</dd> Suddenly the play won’t make anywhere near as much sense. (If it’s anything, the correct markup for a play is an ordered list of CITE and BLOCKQUOTE elements.) This is the first part of the problem. That simple example has turned definition lists in everyone’s mind from pure definitions to more along the lines of a list with pre-configured heading(s) and text(s). Screen reader, enter stage left. In many screen readers, a simple definition list would be read out as “definition term equals definition description”. So in our play excerpt, Juliet equals Romeo! That’s not right, either. But this also leads a lot of people astray with definition lists to believing that they are useful for key/value pairs. Behaviour and convention The WHAT-WG have noticed the common mis-use of the DL, and have codified it into the new spec. In the HTML5 draft, a definition list is no longer a definition list. The dl element introduces an unordered association list consisting of zero or more name-value groups (a description list). Each group must consist of one or more names (dt elements) followed by one or more values (dd elements). They also note that the “dl element is inappropriate for marking up dialogue, since dialogue is ordered”. So for that example they have created a DIALOG (sic) element. Strange, then, that they keep DL as-is but instead refer to it an “association list”. They have not created a new AL element, and kept DL for the original purpose. 
They have chosen not to correct the usage or to create a new opportunity for increased specificity in our HTML, but to “pave the cowpath” of convention. How to use a definition list Given that everyone else is using a DL incorrectly, should we? Well, if they all jumped off a bridge, would you too? No, of course you wouldn’t. We don’t have HTML5 yet, so we’re stuck with the existing semantics of HTML4 and XHTML1. Which means that: Listing dialogue is not defining anything. Listing the attributes of a piece of hardware (resolution = 1600×1200) is illustrating sample values, not defining anything (however, stating what ‘resolution’ actually means in this context would be a definition). Listing the cast and crew of a given movie is not defining the people involved in making movies. (Stuart Gordon may have been the director of Space Truckers, but that by no means makes him the true definition of a director.) A menu of navigation items is simply a nested ordered or unordered list of links, not a definition list. Applying styling handles to form labels and elements is not a good use for a definition list. And so on. Living by the specification, a definition list should be used for term definitions – glossaries, lexicons and dictionaries – only. Anything else is a crime against markup. 2007 Mark Norman Francis marknormanfrancis 2007-12-05T00:00:00+00:00 https://24ways.org/2007/my-other-christmas-present-is-a-definition-list/ code
240 My CSS Wish List I love Christmas. I love walking around the streets of London, looking at the beautifully decorated windows, seeing the shiny lights that hang above Oxford Street and listening to Christmas songs. I’m not going to lie though. Not only do I like buying presents, I love receiving them too. I remember making long lists that I would send to Father Christmas with all of the Lego sets I wanted to get. I knew I could only get one a year, but I would spend days writing the perfect list. The years have gone by, but I still enjoy making wish lists. And I’ll tell you a little secret: my mum still asks me to send her my Christmas list every year. This time I’ve made my CSS wish list. As before, I’d be happy with just one present. Before I begin… … this list includes: things that don’t exist in the CSS specification (if they do, please let me know in the comments – I may have missed them); others that are in the spec, but it’s incomplete or lacks use cases and examples (which usually means that properties haven’t been implemented by even the most recent browsers). Like with any other wish list, the further down I go, the more unrealistic my expectations – but that doesn’t mean I can’t wish. Some of the things we wouldn’t have thought possible a few years ago have been implemented and our wishes fulfilled (think multiple backgrounds, gradients and transformations, for example). The list Cross-browser implementation of font-size-adjust When one of the fall-back fonts from your font stack is used, rather than the preferred (first) one, you can retain the aspect ratio by using this very useful property. It is incredibly helpful when the fall-back fonts are smaller or larger than the initial one, which can make layouts look less polished. What font-size-adjust does is divide the original font-size of the fall-back fonts by the font-size-adjust value. This preserves the x-height of the preferred font in the fall-back fonts. Here’s a simple example: p { font-family: Calibri, "Lucida Sans", Verdana, sans-serif; font-size-adjust: 0.47; } In this case, if the user doesn’t have Calibri installed, both Lucida Sans and Verdana will keep Calibri’s aspect ratio, based on the font’s x-height. This property is a personal favourite and one I keep pointing to. Firefox supported this property from version three. So far, it’s the only browser that does. Fontdeck provides the font-size-adjust value along with its fonts, and has a handy tool for calculating it. More control over overflowing text The text-overflow property lets you control text that overflows its container. The most common use for it is to show an ellipsis to indicate that there is more text than what is shown. To be able to use it, the container should have overflow set to something other than visible, and white-space: nowrap: div { white-space: nowrap; width: 100%; overflow: hidden; text-overflow: ellipsis; } This, however, only works for blocks of text on a single line. In the wish list of many CSS authors (and in mine) is a way of defining text-overflow: ellipsis on a block of multiple text lines. Opera has taken the first step and added support for the -o-ellipsis-lastline property, which can be used instead of ellipsis. This property is not part of the CSS3 spec, but we could certainly make good use of it if it were… WebKit has -webkit-line-clamp to specify how many lines to show before cutting with an ellipsis, but support is patchy at best and there is no control over where the ellipsis shows in the text. 
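For the curious, the WebKit recipe looks something like the following sketch (the .teaser class name is just an example; the property only works together with the old box model properties shown):

.teaser {
  display: -webkit-box;
  -webkit-box-orient: vertical;
  -webkit-line-clamp: 3; /* show three lines, then cut with an ellipsis */
  overflow: hidden;
}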
Many people have spent time wrangling JavaScript to do this for us, but the methods used are very processor intensive, and introduce a JavaScript dependency. Indentation and hanging punctuation properties You might notice a trend here: almost half of the items in this list relate to typography. The lack of fine-grained control over typographical detail is a general concern among designers and CSS authors. Indentation and hanging punctuation fall into this category. The CSS3 specification introduces two new possible values for the text-indent property: each-line; and hanging. each-line would indent the first line of the block container and each line after a forced line break; hanging would invert which lines are affected by the indentation. The proposed hanging-punctuation property would allow us to specify whether opening and closing brackets and quotes should hang outside the edge of the first and last lines. The specification is still incomplete, though, and asks for more examples and use cases. Text alignment and hyphenation properties Following the typographic trend of this list, I’d like to add better control over text alignment and hyphenation properties. The CSS3 module on Generated Content for Paged Media already specifies five new hyphenation-related properties (namely: hyphenate-dictionary; hyphenate-before and hyphenate-after; hyphenate-lines; and hyphenate-character), but it is still being developed and lacks examples. In the text alignment realm, the new text-align-last property allows you to define how the last line of a block (or a line just before a forced break) is aligned, if your text is set to justify. Its value can be: start; end; left; right; center; and justify. The text-justify property should also allow you to have more control over text set to text-align: justify but, for now, only Internet Explorer supports this. calc() This is probably my favourite item in the list: the calc() function. This function is part of the CSS3 Values and Units module, but it has only been implemented by Firefox (4.0). To take advantage of it now you need to use the Mozilla vendor code, -moz-calc(). Imagine you have a fluid two-column layout where the sidebar column has a fixed width of 240 pixels, and the main content area fills the rest of the width available. This is how you could create that using -moz-calc(): #main { width: -moz-calc(100% - 240px); } Can you imagine how many hacks and headaches we could avoid were this function available in more browsers? Transitions and animations are really nice and lovely but, for me, it’s the ability to do the things that calc() allows you to that deserves the spotlight and to be pushed for implementation. Selector grouping with -moz-any() The -moz-any() selector grouping has been introduced by Mozilla but it’s not part of any CSS specification (yet?); it’s currently only available on Firefox 4. This would be especially useful with the way HTML5 outlines documents, where we can have any number of variations of several levels of headings within numerous types of containers (think sections within articles within sections…). Here is a quick example (copied from the Mozilla blog post about the article) of how -moz-any() works. 
Instead of writing: section section h1, section article h1, section aside h1, section nav h1, article section h1, article article h1, article aside h1, article nav h1, aside section h1, aside article h1, aside aside h1, aside nav h1, nav section h1, nav article h1, nav aside h1, nav nav h1 { font-size: 24px; } You could simply write: -moz-any(section, article, aside, nav) -moz-any(section, article, aside, nav) h1 { font-size: 24px; } Nice, huh? More control over styling form elements Some are of the opinion that form elements shouldn’t be styled at all, since a user might not recognise them as such if they don’t match the operating system’s controls. I partially agree: I’d rather put the choice in the hands of designers and expect them to be capable of deciding whether their particular design hampers or improves usability. I would say the same idea applies to font-face: while some fear designers might go crazy and litter their web pages with dozens of different fonts, most welcome the freedom to use something other than Arial or Verdana. There will always be someone who will take this freedom too far, but it would be useful if we could, for example, style the default Opera date picker: <input type="date" /> or Safari’s slider control (think star movie ratings, for example): <input type="range" min="0" max="5" step="1" value="3" /> Parent selector I don’t think there is one CSS author out there who has never come across a case where he or she wished there was a parent selector. There have been many suggestions as to how this could work, but a variation of the child selector is usually the most popular: article < h1 { … } One can dream… Flexible box layout The Flexible Box Layout Module sounds a bit like magic: it introduces a new box model to CSS, allowing you to distribute and order boxes inside other boxes, and determine how the available space is shared. Two of my favourite features of this new box model are: the ability to redistribute boxes in a different order from the markup the ability to create flexible layouts, where boxes shrink (or expand) to fill the available space Let’s take a quick look at the second case. Imagine you have a three-column layout, where the first column takes up twice as much horizontal space as the other two: <body> <section id="main"> </section> <section id="links"> </section> <aside> </aside> </body> With the flexible box model, you could specify it like this: body { display: box; box-orient: horizontal; } #main { box-flex: 2; } #links { box-flex: 1; } aside { box-flex: 1; } If you decide to add a fourth column to this layout, there is no need to recalculate units or percentages, it’s as easy as that. Browser support for this property is still in its early stages (Firefox and WebKit need their vendor prefixes), but we should start to see it being gradually introduced as more attention is drawn to it (I’m looking at you…). You can read a more comprehensive write-up about this property on the Mozilla developer blog. It’s easy to understand why it’s harder to start playing with this module than with things like animations or other more decorative properties, which don’t really break your layouts when users don’t see them. But it’s important that we do, even if only in very experimental projects. 
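If you want to experiment with it today, remember the vendor prefixes. A sketch of the three-column example above, with the prefixes Firefox and WebKit currently require:

body {
  display: -moz-box;
  display: -webkit-box;
  -moz-box-orient: horizontal;
  -webkit-box-orient: horizontal;
}
#main { -moz-box-flex: 2; -webkit-box-flex: 2; }
#links, aside { -moz-box-flex: 1; -webkit-box-flex: 1; }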
Nested selectors Anyone who has never wished they could do something like the following in CSS, cast the first stone: article { h1 { font-size: 1.2em; } ul { margin-bottom: 1.2em; } } Even though it can easily turn into a specificity nightmare and promote redundancy in your style sheets (if you abuse it), it’s easy to see how nested selectors could be useful. CSS compilers such as Less or Sass let you do this already, but not everyone wants or can use these compilers in their projects. Every wish list has an item that could easily be dropped. In my case, I would say this is one that I would ditch first – it’s the least useful, and also the one that could cause more maintenance problems. But it could be nice. Implementation of the ::marker pseudo-element The CSS Lists module introduces the ::marker pseudo-element, that allows you to create custom list item markers. When an element’s display property is set to list-item, this pseudo-element is created. Using the ::marker pseudo-element you could create something like the following: Footnote 1: Both John Locke and his father, Anthony Cooper, are named after 17th- and 18th-century English philosophers; the real Anthony Cooper was educated as a boy by the real John Locke. Footnote 2: Parts of the plane were used as percussion instruments and can be heard in the soundtrack. where the footnote marker is generated by the following CSS: li::marker { content: "Footnote " counter(notes) ":"; text-align: left; width: 12em; } li { counter-increment: notes; } You can read more about how to use counters in CSS in my article from last year. Bear in mind that the CSS Lists module is still a Working Draft and is listed as “Low priority”. I did say this wish list would start to grow more unrealistic closer to the end… Variables The sight of the word ‘variables’ may make some web designers shy away, but when you think of them applied to things such as repeated colours in your stylesheets, it’s easy to see how having variables available in CSS could be useful. Think of a website where the main brand colour is applied to elements like the main text, headings, section backgrounds, borders, and so on. In a particularly large website, where the colour is repeated countless times in the CSS and where it’s important to keep the colour consistent, using variables would be ideal (some big websites are already doing this by using server-side technology). Again, Less and Sass allow you to use variables in your CSS but, again, not everyone can (or wants to) use these. If you are using Less, you could, for instance, set the font-family value in one variable, and simply call that variable later in the code, instead of repeating the complete font stack, like so: @fontFamily: Calibri, "Lucida Grande", "Lucida Sans Unicode", Helvetica, Arial, sans-serif; body { font-family: @fontFamily; } Other features of these CSS compilers might also be useful, like the ability to ‘call’ a property value from another selector (accessors): header { background: #000000; } footer { background: header['background']; } or the ability to define functions (with arguments), saving you from writing large blocks of code when you need to write something like, for example, a CSS gradient: .gradient (@start:"", @end:"") { background: -webkit-gradient(linear, left top, left bottom, from(@start), to(@end)); background: -moz-linear-gradient(-90deg,@start,@end); } button { .gradient(#D0D0D0,#9F9F9F); } Standardised comments Each CSS author has his or her own style for commenting their style sheets. 
While this isn’t a massive problem on smaller projects, where maybe only one person will edit the CSS, in larger scale projects, where dozens of hands touch the code, it would be nice to start seeing a more standardised way of commenting. One attempt at creating a standard for CSS comments is CSSDOC, an adaptation of Javadoc (a documentation generator that extracts comments from Java source code into HTML). CSSDOC uses ‘DocBlocks’, a term borrowed from the phpDocumentor Project. A DocBlock is a human- and machine-readable block of data which has the following structure: /** * Short description * * Long description (this can have multiple lines and contain <p> tags) * * @tags (optional) */ CSSDOC includes a standard for documenting bug fixes and hacks, colours, versioning and copyright information, amongst other important bits of data. I know this isn’t a CSS feature request per se; rather, it’s just me pointing you at something that is usually overlooked but that could contribute towards keeping style sheets easier to maintain and to hand over to new developers. Final notes I understand that if even some of these were implemented in browsers now, it would be a long time until all vendors were up to speed. But if we don’t talk about them and experiment with what’s available, then it will definitely never happen. Why haven’t I mentioned better browser support for existing CSS3 properties? Because that would be the same as adding chocolate to your Christmas wish list – you don’t need to ask, everyone knows you want it. The list could go on. There are dozens of other things I would love to see integrated in CSS or further developed. These are my personal favourites: some might be less useful than others, but I’ve wished for all of them at some point. Part of the research I did while writing this article was asking some friends what they would add to their lists; other than a couple of items I already had in mine, everything else was different. I’m sure your list would be different too. So tell me, what’s on your CSS wish list? 2010 Inayaili de León Persson inayailideleon 2010-12-03T00:00:00+00:00 https://24ways.org/2010/my-css-wish-list/ code
100 Moo'y Christmas A note from the editors: Moo has changed their API since this article was written. As the web matures, it is less and less just about the virtual world. It is becoming entangled with our world and it is harder to tell what is virtual and what is real. There are several companies who are blurring this line and making the virtual just an extension of the physical. Moo is one such company. Moo offers simple print-on-demand services. You can print business cards, Moo mini cards, stickers, postcards and more. They give you the ability to upload your images, customize them, then have them sent to your door. Many companies allow this sort of digital-to-physical interaction, but Moo has taken it one step further and has built an API. Printable stocking stuffers The Moo API consists of a simple XML file that is sent to their servers. It describes all the information needed to dynamically assemble and print your object. This is very helpful, not just for when you want to print your own stickers, but when you want to offer them to your customers, friends, organization or community with no hassle. Moo handles the check-out and shipping; all you need to do is what you do best: create! Now, using an API sounds complicated, but it is actually very easy. I am going to walk you through the options so you can easily be printing in no time. Before you can begin sending data to the Moo API, you need to register and get an API key. This is important, because it allows Moo to track usage and to credit you. To register, visit http://www.moo.com/api/ and click “Request an API key”. In the following examples, I will use {YOUR API KEY HERE} as a placeholder; replace that with your API key and everything will work fine. The first thing you need to do is create an XML file to describe the check-out basket. Open any text editor and start with some XML basics. Don’t worry, this is pretty simple and Moo gives you a few tools to check your XML for errors before you order. <?xml version="1.0" encoding="UTF-8"?> <moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <request> <version>0.7</version> <api_key>{YOUR API KEY HERE}</api_key> <call>build</call> <return_to>http://www.example.com/return.html</return_to> <fail_to>http://www.example.com/fail.html</fail_to> </request> <payload> ... </payload> </moo> Much like HTML’s <head> and <body>, Moo has created <request> and <payload> elements all wrapped in a <moo> element. The <request> element contains a few pieces of information that are the same across all the API calls. The <version> element describes which version of the API is being used. This is more important for Moo than for you, so just stick with “0.7” for now. The <api_key> allows Moo to track sales, referrers and credit your account. The <call> element can only take “build”, so that is pretty straightforward. The <return_to> and <fail_to> elements are URLs. These are optional and are the URLs the customer is redirected to if there is an error, or when the check-out process is complete. This allows for some basic branding and a custom “thank you” page which is under your control. That’s it for the <request> element – pretty easy so far! Next up is the <payload> element. What goes inside here describes what is to be printed. There are two possible elements: we can put <chooser> or we can put <products> directly inside <payload>. 
They work in similar ways, but they drop the customer into different parts of the Moo checkout process. If you specify <products> then you send the customer straight to the Moo payment process. If you specify <chooser> then you send the customer one step earlier, where they are allowed to pick and choose some images, remove the ones they don’t like, adjust the crop, etc. The example here will use <chooser>, but with a little bit of homework you can easily adjust to <products> if you desire. ... <chooser> <product_type>sticker</product_type> <images> <url>http://example.com/images/christmas1.jpg</url> </images> </chooser> ... Inside the <chooser> element, we can see there are two basic pieces of information: the type of product we want to print, and the images that are to be printed. The <product_type> element can take one of five options and is required! The possibilities are: minicard, notecard, sticker, postcard or greetingcard. We’ll now look at two of these more closely. Moo Stickers In the Moo sticker books you get 90 small squarish stickers in a small little booklet. The simplest XML you could send would be something like the following payload: ... <payload> <chooser> <product_type>sticker</product_type> <images> <url>http://example.com/image1.jpg</url> </images> <images> <url>http://example.com/image2.jpg</url> </images> <images> <url>http://example.com/image3.jpg</url> </images> </chooser> </payload> ... This creates a sticker book with only 3 unique images, but 30 copies of each image. The sticker books always print 90 stickers, in multiples of the images you uploaded. That example only has 3 <images> elements, but you can easily duplicate the XML and send up to 90. The <url> should be the full path to your image and the image needs to be a minimum of 300 pixels by 300 pixels. You can add more XML to describe cropping, but the simplest option is either to let your customers choose, or to pre-crop all your images square so there are no issues. The full XML you would post to the Moo API to print sticker books would look like this: <?xml version="1.0" encoding="UTF-8"?> <moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <request> <version>0.7</version> <api_key>{YOUR API KEY HERE}</api_key> <call>build</call> <return_to>http://www.example.com/return.html</return_to> <fail_to>http://www.example.com/fail.html</fail_to> </request> <payload> <chooser> <product_type>sticker</product_type> <images> <url>http://example.com/image1.jpg</url> </images> <images> <url>http://example.com/image2.jpg</url> </images> <images> <url>http://example.com/image3.jpg</url> </images> </chooser> </payload> </moo> Mini-cards The mini-cards are the small, cute business cards in 14×35 dimensions and come in packs of 100. Since the mini-cards are print on demand, this allows you to have 100 unique images on the back of the cards. Just like the stickers example, we need the same XML setup. The <moo> element and <request> elements will be the same as before. The part you will focus on is the <payload> section. Since you are sending along specific information, we can’t use the <chooser> option any more. Switch this to <products>, which has a child of <product>, which in turn has a <product_type> and <designs>. This might seem like a lot of work, but once you have it set up you won’t need to change it. ... <payload> <products> <product> <product_type>minicard</product_type> <designs> ... </designs> </product> </products> </payload> ... 
So now that we have the basic framework, we can talk about the information specific to minicards. Inside the <designs> element, you will have one <design> for each card. Much like before, this contains a way to describe the image. Note that this time the element is called <image>, not images plural. Inside the <image> element you have a <url> which points to where the image lives and a <type>. The <type> should just be set to ‘variable’. You can pass crop information here instead, but we’re going to keep it simple for this tutorial. If you are interested in how that works, you should refer to the official API documentation. ... <design> <image> <url>http://example.com/image1.jpg</url> <type>variable</type> </image> </design> ... So far, we have managed to build a pack of 100 Moo mini-cards with the same image on the front. If you wanted 100 different images, you just need to replicate this snippet 99 more times. That describes the front design, but the flip-side of your mini-cards can contain 6 lines of text, customizable in a variety of colors, fonts and styles. The API allows you to create different text on the back of each mini-card, something the web interface doesn’t implement. To describe the text on the mini-card we need to add a <text_collection> element inside the <design> element. If you skip this element, the back of your mini-card will just be blank, but that’s not very festive! Inside the <text_collection> element, we need to describe the type of text we want to format, so we add a <minicard> element, which in turn contains all the lines of text. Each of Moo’s printed products takes a different number of lines of text, so if you are not planning on making mini-cards, be sure to consult the documentation. For mini-cards, we can have 6 distinct lines, each with their own style and layout. Each line is represented by an element <text_line> which has several optional children. The <id> tells which line of the 6 to print the text on. The <string> is the text you want to print and it must be shorter than 38 characters. The <bold> element is false by default, but if you want your text bolded, then add this and set it to true. The <align> element is also optional. By default it is set to align left. You can also set this to right or center if you desire. The <font> element takes one of 3 types: modern, traditional or typewriter. The default is modern. Finally, you can set the <colour> – yes, that’s color with a ‘u’; Moo is a British company, so they get to make the rules. When you start a print on demand company, you can spell it however you want. The <colour> element takes a 6 character hex value with a leading #. <design> ... 
<text_collection> <minicard> <text_line> <id>(1-6)</id> <string>String, I must be less than 38 chars!</string> <bold>true</bold> <align>left</align> <font>modern</font> <colour>#ff0000</colour> </text_line> </minicard> </text_collection> </design> If you combine all of this into a mini-card request you’d get this example: <?xml version="1.0" encoding="UTF-8"?> <moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <request> <version>0.7</version> <api_key>{YOUR API KEY HERE}</api_key> <call>build</call> <return_to>http://www.example.com/return.html</return_to> <fail_to>http://www.example.com/fail.html</fail_to> </request> <payload> <products> <product> <product_type>minicard</product_type> <designs> <design> <image> <url>http://example.com/image1.jpg</url> <type>variable</type> </image> <text_collection> <minicard> <text_line> <id>1</id> <string>String, I must be less than 38 chars!</string> <bold>true</bold> <align>left</align> <font>modern</font> <colour>#ff0000</colour> </text_line> </minicard> </text_collection> </design> </designs> </product> </products> </payload> </moo> Now you know how to construct the XML that describes what to print. Next, you need to know how to send it to Moo to make it happen! Posting to the API So your XML file is ready to go. The first thing we need to do is check it to make sure it’s valid. Moo has created a simple validator where you paste in your XML, and it alerts you to problems. When you have a fully valid XML file, you’ll want to send that to the Moo API. There are a few ways to do this, but the simplest is with an HTML form. This is the sample code for an HTML form with a big “Buy My Stickers” button. Once you know that it is working, you can use all your existing HTML knowledge to style it up any way you like. <form method="POST" action="http://www.moo.com/api/api.php"> <input type="hidden" name="xml" value='<?xml version="1.0" encoding="UTF-8"?> <moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <request>....</request> <payload>...</payload> </moo> '> <input type="submit" name="submit" value="Buy My Stickers"/> </form> (Note that the value attribute is wrapped in single quotes here, because the XML inside it contains double quotes.) This is just a basic <form> element that submits to the Moo API, http://www.moo.com/api/api.php, when someone clicks the button. There is a hidden input called “xml” which contains the value of the XML file we created previously. Those of you who need to “view source” to fully understand what’s happening can see a working version and peek under the hood. Using the API has advantages over uploading the images directly yourself. The images and text that you send via the API can be dynamic. Some companies, like Dopplr, have taken user profiles and dynamic data that changes every minute to generate custom stickers of places that you’ve travelled to or mini-cards with a world map of all the cities you have visited. Every single customer has different travel plans and therefore different sets of stickers and mini-card maps. The API allows the most current information to be printed, on demand, in real time (a small sketch of assembling the XML dynamically follows at the end of this article). Go forth and Moo’ltiply See, making an API call wasn’t that hard, was it? You are now 90% of the way to creating anything with the Moo API. With a bit of reading, you can learn that extra 10% and print any Moo product. Be on the lookout in 2009 for the official release of the 1.0 API with improvements and some extras that were not available when this article was written. 
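As a rough sketch of that dynamic idea – my addition, not from Moo’s documentation, and the form id (‘moo-form’) and image URLs are assumptions – you could assemble the XML on the client before the form is submitted:

// Build the sticker <images> fragment from a list of URLs and
// drop the finished document into the hidden "xml" input.
var images = [
  'http://example.com/image1.jpg',
  'http://example.com/image2.jpg',
  'http://example.com/image3.jpg'
];
var imageXml = '';
for (var i = 0; i < images.length; i++) {
  imageXml += '<images><url>' + images[i] + '</url></images>';
}
var xml = '<?xml version="1.0" encoding="UTF-8"?>' +
  '<moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd"' +
  ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">' +
  '<request><version>0.7</version>' +
  '<api_key>{YOUR API KEY HERE}</api_key>' +
  '<call>build</call></request>' +
  '<payload><chooser><product_type>sticker</product_type>' +
  imageXml +
  '</chooser></payload></moo>';
// Assumes the form from the example above has id="moo-form".
document.getElementById('moo-form').elements['xml'].value = xml;

The same trick works server-side: anything that can concatenate strings can build a Moo order.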
This article is released under the creative-commons attribution share-a-like license. That means you are free to re-distribute it, mash it up, translate it and otherwise re-use it in ways the author never considered; in return, he only asks that you mention his name. This work by Brian Suda is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. 2008 Brian Suda briansuda 2008-12-19T00:00:00+00:00 https://24ways.org/2008/mooy-christmas/ code
90 Monkey Business “Too expensive.” “Over-priced.” “A bit rich.” They all mean the same thing. When you say that something’s too expensive, you’re doing much more than commenting on a price. You’re questioning the explicit or implicit value of a product or a service. You’re asking, “Will I get out of it what you want me to pay for it?” You’re questioning the competency, judgement and possibly even integrity of the individual or company that gave you that price, even though you don’t realise it. You might not be saying it explicitly, but what you’re implying is, “Have you made a mistake?”, “Am I getting the best deal?”, “Are you being honest with me?”, “Could I get this cheaper?” Finally, you’re being dishonest, because deep down you know all too well that there’s no such thing as too expensive. Why? It doesn’t matter what you’re questioning the price of. It could be a product, a service or the cost of an hour, day or week of someone’s time. Whatever you’re buying, too expensive is always an excuse. Saying it shifts acceptability of a price back to the person who gave it. What you should say, but are too afraid to admit, is: “It’s more money than I wanted to pay.” “It’s more than I estimated it would cost.” “It’s more than I can afford.” Everyone who’s given a price for a product or service will have been told at some point that it’s too expensive. It’s never comfortable to hear that. Thoughts come thick and fast: “What do I do?” “How do I react?” “Do I really want the business?” “Am I prepared to negotiate?” “How much am I willing to compromise?” It’s easy to be defensive when someone questions a price, but before you react, stay calm and remember that if someone says what you’re offering is too expensive, they’re saying more about themselves and their situation than they are about your price. Learn to read that situation and how to follow up with the right questions. Imagine you’ve quoted someone for a week of your time. “That’s too expensive,” they respond. How should you handle that? Think about what they might otherwise be saying. “It’s more money than I want to pay” may mean that they don’t understand the value of your service. How could you respond? Start by asking what similar projects they’ve worked on and the type of people they worked with. Find out what they paid and what they got for their money, because it’s possible what you offer is different from what they had before. Ask if they saw a return on that previous investment. Maybe their problem isn’t with your headline price, but the value they think they’ll receive. Put the emphasis on value and shift the conversation to what they’ll gain, rather than what they’ll spend. It’s also possible they can’t distinguish your service from those of your competitors, so now would be a great time to explain the differences. Do you work faster? Explain how that could help them launch faster, get customers faster, make money faster. Do you include more? Emphasise that, and how unique the experience of working with you will be. “It’s more than I estimated it would cost” could mean that your customer hasn’t done their research properly. You’d never suggest that to them, of course, but you should ask how they’ve arrived at their estimate. Did they base it on work they’ve purchased previously? How long ago was that? Does it come from comparable work or from a different sector? Help your customer by explaining how you arrived at your estimate. 
Break down each element and while you’re doing that, emphasise the parts of your process that you know will appeal to them. If you know that they’ve had difficulty with something in the past, explain how your approach will benefit them. People almost always value a positive experience more than the money they’ll save. “It’s more than I can afford” could mean they can’t afford what you offer at all, but it could also mean they can’t afford it right now or all at once. So ask if they could afford what you’re asking if they spread payment over a longer period. Ask, “Would that mean you’ll give me the business?” It’s possible they’re asking for too much for what they can afford to pay. Will they compromise? Can you reach an agreement on something less? Ask, “If we can agree what’s in and what’s out, will you give me the business?” What can they afford? When you know, you’re in a good position to decide if the deal makes good business sense, for both of you. Ask, “If I can match that price, will you give me the business?” There’s no such thing as “a bit rich”, only ways for you to get to know your customer better. There’s no such thing as “over-priced”, only opportunities for you to explain yourself better. You should relish those opportunities. There’s really also no such thing as “too expensive”, just ways to set the tone for your relationship and help you develop that relationship to a point where money will be less of a deciding factor. Unfinished Business Join me and my co-host Anna Debenham next year for Unfinished Business, a new discussion show about the business end of working in web, design and creative industries. 2012 Andy Clarke andyclarke 2012-12-23T00:00:00+00:00 https://24ways.org/2012/monkey-business/ business
156 Mobile 2.0 Thinking 2.0 As web geeks, we have a thick skin towards jargon. We all know that “Web 2.0” has been done to death. At Blue Flavor we even have a jargon bucket to penalize those who utter such painfully overused jargon with a cash deposit. But Web 2.0 is a term that has lodged itself into the consciousness of the masses. This is actually a good thing. The 2.0 suffix was able to succinctly summarize all that was wrong with the Web during the dot-com era as well as the next evolution of an evolving medium. While the core technologies actually stayed basically the same, the principles, concepts, interactions and contexts were radically different. With that in mind, this Christmas I want to introduce to you the concept of Mobile 2.0. While not exactly a new concept in the mobile community, it is relatively unknown in the web community. And since the foundation of Mobile 2.0 is the web, I figured it was about time for you to get to know each other. It’s the Carriers’ world. We just live in it. Before getting into Mobile 2.0, I thought first I should introduce you to its older brother. You know the kind, the kid with emotional problems that likes to beat up on you and your friends for absolutely no reason. That is the mobile of today. The mobile ecosystem is a very complicated space, often (and incorrectly) compared to the Web. If the Web was a freewheeling hippie — believing in freedom of information and the unity of man through communities — then Mobile is the cutthroat capitalist — out to pillage and plunder for the sake of the almighty dollar. Where the Web is relatively easy to publish to and ultimately make a buck, Mobile is fraught with layers of complexity, politics and obstacles. I can think of no better way to summarize these challenges than the testimony of Jason Devitt to the United States Congress in what is now being referred to as the “iPhone Hearing.” Jason is the co-founder and CEO of SkyDeck, a new wireless startup, and former CEO of Vindigo, an early pioneer in mobile content. As Jason points out, the mobile ecosystem is a closed-door environment controlled by the carriers, forcing the independent publisher to compete or succumb to the will of corporate behemoths. But that is all about to change. Introducing Mobile 2.0 Mobile 2.0 is a term used by the mobile community to describe the current revolution happening in mobile. It describes the convergence of mobile and web services, adding portability, ubiquitous connectivity and location-aware services to give physical context to information found on the Web. It’s an important term that looks toward the future. Allowing us to imagine the possibilities that mobile technology has long promised but has yet to deliver. It imagines a world where developers can publish mobile content without the current constraints of the mobile ecosystem. Like the transition from Web 1.0 to 2.0, it signifies the shift away from corporate or brand-centered experiences to user-centered experiences. A focus on richer interactions, driven by user goals. Moving away from proprietary technologies to more open and standard ones, more akin to the Web. And most importantly (from our perspective as web geeks) a shift away from kludgy one-off mobile applications toward using the Web as a platform for content and services. This means the world of the Web and the world of Mobile are coming together faster than you can say ARPU (Average Revenue Per User, a staple mobile term to you webbies). And this couldn’t come at a better time. 
The importance of understanding and addressing user context is quickly becoming a crucial consideration in every interactive experience as the number of ways we access information on the Web increases. Mobile enables the power of the Web, the collective information of millions of people, inherent payment channels and access to just about every other mass medium to literally be overlaid on top of the physical world, in context to the person viewing it. Anyone who can’t imagine how the influence of mobile technology can transform how we perform even the simplest of daily tasks needs to get away from their desktop and see the new evolution of information. The Instigators But what will make Mobile 2.0 move from idyllic concept to a hardened market reality in 2008 will be four key technologies. It’s my guess that you know each of them already. 1. Opera Opera is like the little train that could. They have been a driving force in moving the Web as we know it onto mobile handsets. Opera technology has proven itself to be highly adaptable, finding itself preloaded on over 40 million handsets, available on television sets through the Nintendo Wii or via the Nintendo DS. 2. WebKit Many were surprised when Apple chose to use KHTML instead of Gecko (the guts of Firefox) to power their Safari rendering engine. But WebKit has quickly evolved to be a powerful and flexible browser engine in the mobile context. WebKit has been in Nokia smartphones for a few years now, is the technology behind Mobile Safari in the iPhone and the iPod Touch and is the default web technology in Google’s open mobile platform effort, Android. 3. The iPhone The iPhone has finally brought the concepts and principles of Mobile 2.0 into the forefront of consumers’ minds and therefore developers’ minds as well. Over 500 web applications have been written specifically for the iPhone since its launch. It’s completely unheard of to see so many applications built for the mobile context in such a short period of time. 4. CSS & Javascript Web 2.0 could not exist without the rich interactions offered by CSS and Javascript, and Mobile 2.0 is no different. CSS and Javascript support across multiple phones historically has been, well… to put it positively… utter crap. Javascript finally allows developers to create interesting interactions that support user goals and the mobile context. Specifically, AJAX allows us to finally shed the days of bloated Java applications and focus on portable and flexible web applications, while CSS — namely CSS3 — allows us to create designs that are as beautiful as they are economical with bandwidth and load times. With Leaflets, a collection of iPhone-optimized web apps we created, we heavily relied on CSS3 to cache and reuse design elements over and over, minimizing download times while providing an elegant and user-centered design. In Conclusion It is the combination of all these instigators that is significantly lowering the bar to mobile publishing. The market, as Jason Devitt describes it, will begin to fade into the background. And maybe the world of mobile will finally start looking more like the Web that we all know and love. So after the merriment and celebration of the holiday is over and you look toward the new year to refresh and renew, I hope that you seriously consider the mobile medium. By this time next year, it is predicted that one-third of humanity will be using mobile devices to access the Web. 2007 Brian Fling brianfling 2007-12-21T00:00:00+00:00 https://24ways.org/2007/mobile-2-0/ business
258 Mistletoe Offline It’s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season. I am, of course, talking about Murphy, and the golden rule he gave unto us: Anything that can go wrong will go wrong. So true! I mean, that’s why we make sure we’ve got nice 404 pages. It’s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it’s bound to happen sometime. Murphy’s Law, innit? But there are some Murphyesque situations where even your lovingly crafted 404 page won’t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things that can—and will—go wrong. I guess there’s nothing we can do about those particular situations, right? Wrong! A service worker is a Murphy-battling technology that you can inject into a visitor’s device from your website. Once it’s installed, it can intercept any requests made to your domain. If anything goes wrong with a request—as is inevitable—you can provide instructions for the browser. That’s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade. If you’ve got a custom 404 page, why not make a custom offline page too? Get your server in order Step one is to make …actually, wait. There’s a step before that. Step zero. Get your site running on HTTPS, if it isn’t already. You won’t be able to use a service worker unless everything’s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields. If you’re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must. Make an offline page Alright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here’s an example of the custom offline page for this year’s Ampersand conference. When you’re done, publish the offline page at a suitably imaginative URL, like, say, /offline.html. Pre-cache your offline page Now create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user’s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener: addEventListener('install', installEvent => { // put your instructions here. }); // end addEventListener In this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. 
Here, I’m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code: addEventListener('install', installEvent => { installEvent.waitUntil( caches.open('Johnny') .then( JohnnyCache => { JohnnyCache.addAll([ '/offline.html' ]); // end addAll }) // end open.then ); // end waitUntil }); // end addEventListener I’m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point: addEventListener('install', installEvent => { installEvent.waitUntil( caches.open('Johnny') .then( JohnnyCache => { JohnnyCache.addAll([ '/offline.html', '/path/to/stylesheet.css', '/path/to/javascript.js', '/path/to/image.jpg' ]); // end addAll }) // end open.then ); // end waitUntil }); // end addEventListener Make sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached. Intercept requests The next event you want to listen for is the fetch event. This is probably the most powerful—and, let’s be honest, the creepiest—feature of a service worker. Once it has been installed, the service worker lurks on the user’s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time: addEventListener('fetch', fetchEvent => { // What happens next is up to you! }); // end addEventListener Let’s write a fairly conservative script with the following logic: Whenever a file is requested, First, try to fetch it from the network, But if that doesn’t work, try to find it in the cache, But if that doesn’t work, and it’s a request for a web page, show the custom offline page instead. Here’s how that translates into JavaScript: // Whenever a file is requested addEventListener('fetch', fetchEvent => { const request = fetchEvent.request; fetchEvent.respondWith( // First, try to fetch it from the network fetch(request) .then( responseFromFetch => { return responseFromFetch; }) // end fetch.then // But if that doesn't work .catch( fetchError => { // try to find it in the cache (note the return: without it, the response would be lost) return caches.match(request) .then( responseFromCache => { if (responseFromCache) { return responseFromCache; // But if that doesn't work } else { // and it's a request for a web page if (request.headers.get('Accept').includes('text/html')) { // show the custom offline page instead return caches.match('/offline.html'); } // end if } // end if/else }) // end match.then }) // end fetch.catch ); // end respondWith }); // end addEventListener I am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what’s happening at each point in the code, I’ve written a whole book for you. It’s the perfect present for Murphymas. Hook up your service worker script You can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you’re calling into every page on your site, or add this in a script element at the end of every page’s HTML: if (navigator.serviceWorker) { navigator.serviceWorker.register('/serviceworker.js'); } That tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. 
When it comes to JavaScript, feature detection is your friend. You might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don’t do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you’re making sure that every request can be intercepted. Go further! Nicely done! You’ve made sure that if—no, when—a visitor can’t reach your website, they’ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe! This is just the beginning. You can do more with service workers. What if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they’re offline, you could show them the cached version. Or, what if instead of reaching out to the network first, you checked to see if a file is in the cache first? You could serve up that cached version—which would be blazingly fast—and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images. So many options! (A rough sketch of that first page-caching idea follows at the end of this article.) The hard part isn’t writing the code, it’s figuring out the steps you want to take. Once you’ve got those steps written out, then it’s a matter of translating them into JavaScript. Inevitably there will be some obstacles along the way—usually it’s a misplaced curly brace or a missing parenthesis. Don’t be too hard on yourself if your code doesn’t work at first. That’s just Murphy’s Law in action. 2018 Jeremy Keith jeremykeith 2018-12-04T00:00:00+00:00 https://24ways.org/2018/mistletoe-offline/ code
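A rough sketch of that first ‘cache as you go’ idea – my addition, not code from the article; the cache name 'pages' is an assumption:

// Cache every HTML page as it is fetched, and fall back to the
// cached copy (or the offline page) when the network fails.
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then( responseFromFetch => {
        // Put a copy in the cache as the page arrives...
        const copy = responseFromFetch.clone();
        caches.open('pages')
        .then( pagesCache => {
          pagesCache.put(request, copy);
        });
        return responseFromFetch;
      })
      .catch( fetchError => {
        // ...and reach for that copy (or the offline page) later.
        return caches.match(request)
        .then( responseFromCache => {
          return responseFromCache || caches.match('/offline.html');
        });
      })
    );
  } // all other requests fall through to the browser's default behaviour
});

The style follows the article’s own fetch handler; only the cloning and the caches.open('pages') steps are new.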
155 Minification: A Christmas Diet The festive season is generally more about gorging ourselves than staying thin but we’re going to change all that with a quick introduction to minification. Performance has been a hot topic this last year. We’re building more complex sites and applications but at the same time trying to make them load faster and behave more responsively. What is a discerning web developer to do? Minification is the process of making something smaller; in the case of website performance, we’re talking about reducing the size of the files we send to the browser. The primary front-end components of any website are HTML, CSS, Javascript and a sprinkling of images. Let’s find some tools to trim the fat and speed up our sites. For those who want to play along at home, you can download the various utilities for Mac or Windows. You’ll want to be familiar with running apps on the command line too. HTMLTidy HTMLTidy optimises and strips white space from HTML documents. It also has a pretty good go at correcting any invalid markup while it’s at it. tidy -m page.html CSSTidy CSSTidy takes your CSS file, optimises individual rules (for instance transforming padding-top: 10px; padding-bottom: 10px; to padding: 10px 0;) and strips unneeded white space. csstidy style.css style-min.css JSMin JSMin takes your javascript and makes it more compact. With more and more websites using javascript to power (progressive) enhancements this can be a real bandwidth hog. Look out for pre-minified versions of libraries and frameworks too. jsmin <script.js >script-min.js Remember to run JSLint before you run JSMin to catch some common problems. OptiPNG Images can be a real bandwidth hog and making all of them smaller with OptiPNG should speed up your site. optipng image.png All of these tools have an often bewildering array of options and generally good documentation included as part of the package. A little experimentation will get you even more bang for your buck. For larger projects you likely won’t want to be manually minifying all your files. The best approach here is to integrate these tools into your build process and have your live website come out the other side smaller than it went in (a minimal build script along these lines is sketched after this article). You can also do things on the server to speed things up; GZIP compression for instance or compilation of resources to reduce the number of HTTP requests. If you’re interested in performance a good starting point is the Exceptional Performance section on the Yahoo Developer Network and remember to install the YSlow Firebug extension while you’re at it. 2007 Gareth Rushgrove garethrushgrove 2007-12-06T00:00:00+00:00 https://24ways.org/2007/minification-a-christmas-diet/ process
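A minimal sketch of such a build step – my addition, with invented file names and a build/ output directory, chaining the same commands described above:

#!/bin/sh
# Copy sources into build/, then minify each one there.
mkdir -p build
cp src/page.html build/page.html
tidy -m build/page.html                  # tidy and strip the HTML copy in place
csstidy src/style.css build/style.css    # optimise and compact the CSS
jsmin < src/script.js > build/script.js  # compact the javascript
cp src/image.png build/image.png
optipng build/image.png                  # squeeze the PNG

Run it before deploying and build/ holds the smaller files for your live site.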
39 Meet for Learning “I’ve never worked in a place like this,” said one of my direct reports during our daily stand-up meeting. And with that statement, my mind raced to the most important thing about lawyering that I’ve learned from decades of watching lawyers lawyer on TV: don’t ask a question you don’t know the answer to. But I couldn’t stop myself. I wanted to learn more. The thought developed in my mind. The words formed in my mouth. And the vocalization occurred: “A place like this?” “I’ve never worked where people are so honest and transparent about things.” Designing a learning-centered culture Before we started Center Centre, Jared Spool and I discussed both the larger goals and the smaller details of this new UX design school. We talked about things like user experience, curriculum, and structure. We discussed the pattern we saw in our research. Hiring managers told us time and again that great designers have excellent technical and interpersonal skills. But, more importantly, the best designers are lifelong learners—they are willing and able to learn how to do new things. Learning this led us to ask a critical question: how would we intentionally design a learning-centered experience? To craft the experience we were aiming for, we knew we had to create a learning-centered culture for our students and our employees. We knew that our staff would need to model the behaviors our students needed to learn. We knew the best way to shape the culture was to work with our direct reports—our directs—to develop the behaviors we wanted them to exemplify. Building a learning team Our learning-centered culture starts with our staff. We believe in transparency. Transparency builds trust. Effective organizations have effective teams who trust each other as individuals. One huge way we build that trust and provide opportunities for transparency is in our meetings. (I know, I know—meetings! Yuck!) But seriously, running and participating in effective meetings is a great opportunity to build a learning-centered culture. Meetings—when done well—allow individuals time to come together, to share, and to listen. These behaviors, executed on a consistent and regular basis, build honest and trusting relationships. An effective meeting is one that achieves the desired outcomes of that meeting. While different meetings aim for different results, at Center Centre all meetings have a secondary goal: meet for learning. A framework for learning-centered meetings We’ve developed a framework for our meetings. We use it for all our meetings, which means attendees know what to expect. It also saves us from reinventing the wheel in each meeting. These basic steps help our meetings focus on the valuable face-to-face interaction we’re having, and help us truly begin to learn from one another. An agenda for a staff meeting. Use effective meeting basics Prepare for the meeting before the meeting. If you’re running the meeting, prepare a typed agenda and share it before the meeting. Agendas have start times for each item. Start the meeting on time. Don’t wait for stragglers. Define ground rules. Get input from attendees. Recurring meetings don’t have to do this every time. Keep to the meeting agenda. 
Put off-topic questions and ideas in a parking lot, a visual document that everyone can see, so you can address the questions and ideas later. Finish on time. And if you’ve reached the meeting’s goals, finish early. Parking lots where ideas on sticky notes can be posted for later consideration. Focus to learn Have tech-free meetings: no laptops, no phones, no things with notifications. Bring a notebook and a pen. Take notes by hand. You’re not taking minutes, you’re writing to learn. Come with a learning mindset Ask: what are our goals for this meeting? (Hopefully answered by the meeting agenda.) Ask: what can I learn overall? Ask: what can I learn from each of my colleagues? Ask: what can I share that will help the team learn overall? Ask: what can I share that will help each of my colleagues learn? Investing in regularly scheduled learning-centered meetings At Center Centre, we have two types of recurring all-staff meetings: daily stand-ups and weekly staff meetings. (We are a small organization, so it makes sense to meet as an entire group.) Yes, that means we spend thirty minutes each day in stand-up, for a total of two and a half hours of stand-up meeting time each week. And, yes, we also have a weekly ninety-minute sit-down staff meeting on top of that. This investment in time is an investment in learning. We use these meetings to build our transparency, and, therefore, our trust. The regularity of these meetings helps us maintain ongoing, open sharing about our responsibilities, our successes, and our learning. For instance, we answer five questions in our stand-up: What did I get done since the last stand-up I reported at? What is my goal to accomplish before the next stand-up? What’s preventing me from getting these things done, if anything? What’s the highest risk or most unknown thing right now about what I’m trying to get done? What is the most important thing I learned since the last time we met and how will what I learned change the way I approach things in the future? Each person writes out their answers to these questions before the meeting. Each person brings their answers printed on paper to the meeting. And each person brings a pen to jot down notes. Notes compiled for a stand-up meeting. During the stand-up, each person shares their answers to the five questions. To sustain a learning-centered culture, the fifth question is the most important question to answer. It allows individual reflection focused on learning. Sometimes this isn’t an easy question to answer. It makes us stretch. It makes us think. By sharing our individual answers to the fifth question, we open ourselves up to the group. When we honestly share what we’ve learned, we openly admit that we didn’t know something. Sharing like this would be scary (and even risky) if we didn’t have a learning-centered culture. We often share the actual process of how we learned something. By listening, each of us is invited to learn more about the topic at hand, consider what more there is to learn about that topic, and even gain insights into other methods of learning—which can be applied to other topics. Sharing the answers to the fifth question also allows opportunities for further conversations. We often take what someone has individually learned and find ways to apply it for our entire team in support of our organization. We are, after all, learning together. 
Building individual learners We strive to grow together as a team at Center Centre, but we don’t lose sight of the importance of the individuals who form our team. As individuals, we bring our goals, dreams, abilities, and prior knowledge to the team. To build learning teams, we must build individual learners. A team made up of lifelong learners, who share their learning and learn from each other, is a team that will continually produce better results. As a manager, I need to meet each direct where they are with their current abilities and knowledge. Then, I can help them take their skills and knowledge base to the next levels. This process requires each individual direct to engage in professional development. We believe effective managers help their directs engage in behaviors that support growth and development. Effective managers encourage and support learning. Our weekly one-on-ones One way we encourage learning is through weekly one-on-ones. Each of my directs meets with me, individually, for thirty minutes each week. The meeting is their meeting. It is not my meeting. My direct sets the agenda. They talk about what they want to talk about. They can talk about work. They can talk about things outside of work. They can talk about their health, their kids, and even their cat. Whatever is important to them is important to me. I listen. I take notes. Although the direct sets the specific agenda, the meeting has three main parts. Approximately ten minutes for them (the direct), ten minutes for me (the manager), and ten minutes for us to talk about their future within—and beyond—our organization. Coaching for future performance The final third of our one-on-one is when I coach my directs. Coaching looks to the direct’s future performance. It focuses on developing the direct’s skills. Coaching isn’t hard. It doesn’t take much time. For me, it usually takes less than five minutes a week during a one-on-one. The first time I coach one of my directs, I ask them to brainstorm about the skills they want to improve. They usually already have an idea about this. It’s often something they’ve wanted to work on for some time, but didn’t think they had the time or the knowhow to improve. If a direct doesn’t know what they want to improve, we discuss their job responsibilities—specifically the aspects of the job that concern them. Coaching provides an opportunity for me to ask, “In your job, what are the required skills that you feel like you don’t have (or know well enough, or perform effectively, or use with ease)?” Sometimes I have to remind a direct that it’s okay not to know how to do something (even if it’s a required part of their job). After all, our organization is a learning organization. In a learning organization, no one knows everything but everyone is willing to learn anything. After we review the job responsibilities together, I ask my direct what skill they’d like to work to improve. Whatever they choose, we focus on that skill for coaching—I’ve found my directs work better when they’re internally motivated. Sometimes the first time I talk with a direct about coaching, they get a bit anxious. If this happens, I share a personal story about my professional learning journey. I say something like: I didn’t know how to make a school before we started to make Center Centre. I didn’t know how to manage an entire team of people—day in and day out—until I started managing a team of people every day. 
When I realized that I was the boss—and that the success of the school would hinge, at least in part, on my skills as a manager—I was a bit terrified. I was missing an important skill set that I needed to know (and I needed to know well). When I first understood this, I felt bad—like I should have already known how to be a great manager. But then I realized, I’d never faced this situation. I’d never needed to know how to use this skill set in this way. I worked through my anxiety about feeling inadequate. I decided I’d better learn how to be an effective manager because the school needed me to be one. You needed me to be one. Every day, I work to improve my management skills. You’ve probably noticed that some days I’m better at it than others. I try not to beat myself up about this, although it’s hard—I’d like to be perfect at it. But I’m not. I know that if I make a conscious, daily effort to learn how to be a better manager, I’ll continue to improve. So that’s what I do. Every day I learn. I learn by doing. I learn how to be better than I was the day before. That’s what I ask of you. Once we determine the skill the direct wants to learn, we figure out how they can go about learning it. I ask: “How could you learn this skill?” We brainstorm for two or three minutes about this. We write down every idea that comes to mind, and we write it so both of us can easily see the options (both whiteboards and sticky notes on the wall work well for this exercise). Read a book. Research online. Watch a virtual seminar. Listen to a podcast. Talk to a mentor. Reach out to an expert. Attend a conference. Shadow someone else while they do the skill. Join a professional organization. The goal is to get the direct on a path of self-development. I’m coaching their development, but I’m not the main way my direct will learn this new skill. I ask my direct which path seems like the best place to start. I let them choose whatever option they want (as long as it works with our budget). They are more likely to follow through if they are in control of this process. Next, we work to break down the selected path into tasks. We only plan one week’s worth of tasks. The tasks are small, and the deadlines are short. My direct reports when each task is completed. At our next one-on-one, I ask my direct about their experience learning this new skill. Rinse. Repeat. That’s it. I spend five minutes a week talking with each direct about their individual learning. They develop their professional skills, and together we’re creating a learning-centered culture. Asking questions I don’t know the answer to When my direct said, “I’ve never worked where people are so honest and transparent about things,” it led me to believe that all this is working. We are building a learning-centered culture. This week I was reminded that creating a learning-centered culture starts not just with the staff, but with me. When I challenge myself to learn and then share what I’m currently learning, my directs want to learn more about what I’m learning about. For example, I decided I needed to improve my writing skills. A few weeks ago, I realized that I was sorely out of practice and I felt like I had lost my voice. So I started to write. I put words on paper. I felt overwhelmed. I felt like I didn’t know how to write anymore (at least not well or effectively). I bought some books on writing (mostly Peter Elbow’s books like Writing with Power, Writing Without Teachers, and Vernacular Eloquence), and I read them. I read them all. 
Reading these books was part of my personal coaching. I used the same steps to coach myself as I use with my directs when I coach them. In stand-ups, I started sharing what I accomplished (like I completed one of the books) and what I learned by doing—specific things, like engaging in freewriting and an open-ended writing process. This week, I went to lunch with one of my directs. She said, “You’ve been talking about freewriting a lot. You’re really excited about it. Freewriting seems like it’s helping your writing process. Would you tell me more about it?” So I shared the details with her. I shared the reasons why I think freewriting is helping. I’m not focused on perfection. Instead, each day I’m focused on spending ten, uninterrupted minutes writing down whatever comes to my mind. It’s opening my writing mind. It’s allowing my words to flow more freely. And it’s helping me feel less self-conscious about my writing. She said, “Leslie, when you say you’re self-conscious about your writing, I laugh. Not because it’s funny. But because when I read what you write, I think, ‘What is there to improve?’ I think you’re a great writer. It’s interesting to know that you think you can be a better writer. I like learning about your learning process. I think I could do freewriting. I’m going to give it a try.” There’s something magical about all of this. I’m not even sure I can eloquently put it into words. I just know that our working environment is something very different. I’ve never experienced anything quite like it. Somehow, by sharing that I don’t know everything and that I’m always working to learn more, I invite my directs to be really open about what they don’t know. And they see it’s possible always to learn and grow. I’m glad I ignore all the lawyering I’ve learned from watching TV. I’m glad I ask the questions I don’t know the answers to. And I’m glad my directs do the same. When we meet for learning, we accelerate and amplify the learning process—building individual learners and learning teams. Embracing the unknown and working toward understanding is what makes our culture a learning-centered culture. Photos by Summer Kohlhorst. 2014 Leslie Jensen-Inman lesliejenseninman 2014-12-20T00:00:00+00:00 https://24ways.org/2014/meet-for-learning/ process
143 Marking Up a Tag Cloud Everyone’s doing it. The problem is, everyone’s doing it wrong. Harsh words, you might think. But the crimes against decent markup are legion in this area. You see, I’m something of a markup and semantics junkie. So I’m going to analyse some of the more well-known tag clouds on the internet, explain what’s wrong, and then show you one way to do it better. del.icio.us I think the first ever tag cloud I saw was on del.icio.us. Here’s how they mark it up. <div class="alphacloud"> <a href="/tag/.net" class="lb s2">.net</a> <a href="/tag/advertising" class=" s3">advertising</a> <a href="/tag/ajax" class=" s5">ajax</a> ... </div> Unfortunately, that is one of the worst examples of tag cloud markup I have ever seen. The page states that a tag cloud is a list of tags where size reflects popularity. However, despite describing it in this way to the human readers, the page’s author hasn’t described it that way in the markup. It isn’t a list of tags, just a bunch of anchors in a <div>. This is also inaccessible because a screenreader will not pause between adjacent links, and in some configurations will not announce the individual links, but rather all of the tags will be read as just one link containing a whole bunch of words. Markup crime number one. Flickr Ah, Flickr. The darling photo sharing site of the internet, and the biggest blind spot in every standardista’s vision. Forgive it for having atrocious markup and sometimes confusing UI because it’s just so much damn fun to use. Let’s see what they do. <p id="TagCloud">  <a href="/photos/tags/06/" style="font-size: 14px;">06</a>   <a href="/photos/tags/africa/" style="font-size: 12px;">africa</a>   <a href="/photos/tags/amsterdam/" style="font-size: 14px;">amsterdam</a>  ... </p> Again we have a simple collection of anchors like del.icio.us, only this time in a paragraph. But rather than using a class to represent the size of the tag they use an inline style. An inline style using a pixel-based font size. That’s so far away from the goal of separating style from content, they might as well use a <font> tag. You could theoretically parse that to extract the information, but you have more work to guess what the pixel sizes represent. Markup crime number two (and extra jail time for using non-breaking spaces purely for visual spacing purposes.) Technorati Ah, now. Here, you’d expect something decent. After all, the Overlord of microformats and King of Semantics Tantek Çelik works there. Surely we’ll see something decent here? <ol class="heatmap"> <li><em><em><em><em><a href="/tag/Britney+Spears">Britney Spears</a></em></em></em></em></li> <li><em><em><em><em><em><em><em><em><em><a href="/tag/Bush">Bush</a></em></em></em></em></em></em></em></em></em></li> <li><em><em><em><em><em><em><em><em><em><em><em><em><em><a href="/tag/Christmas">Christmas</a></em></em></em></em></em></em></em></em></em></em></em></em></em></li> ... <li><em><em><em><em><em><em><a href="/tag/SEO">SEO</a></em></em></em></em></em></em></li> <li><em><em><em><em><em><em><em><em><em><em><em><em><em><em><em><a href="/tag/Shopping">Shopping</a></em></em></em></em></em></em></em></em></em></em></em></em></em></em></em></li> ... </ol> Unfortunately it turns out not to be that decent, and stop calling me Shirley. It’s not exactly terrible code. It does recognise that a tag cloud is a list of links. And, since they’re in alphabetical order, that it’s an ordered list of links. That’s nice. However … fifteen nested <em> tags? FIFTEEN? That’s emphasis for you. 
Yes, it is parse-able, but it’s also something of a strange way of looking at emphasis. The HTML spec states that <em> is emphasis, and <strong> is for stronger emphasis. Nesting <em> tags seems counter to the idea that different tags are used for different levels of emphasis. Plus, if you had a screen reader that stressed the voice for emphasis, what would it do? Shout at you? Markup crime number three. So what should it be? As del.icio.us tells us, a tag cloud is a list of tags where the size at which they are rendered carries extra information. However, by hiding the extra context purely within the CSS or the HTML tags used, you are denying that context to some users. The basic assumption being made is that all users will be able to see the difference between font sizes, and this is demonstrably false. A better way to code a tag cloud is to put the context of the cloud within the content, not the markup or CSS alone. As an example, I’m going to take some of my favourite flickr tags and put them into a cloud which communicates the relative frequency of each tag. To start with, a tag cloud in its most basic form is just a list of links. I am going to present them in alphabetical order, so I’ll use an ordered list. Into each list item I add the number of photos I have with that particular tag. The tag itself is linked to the page on flickr which contains those photos. So we end up with this first example. To display this as a traditional tag cloud, we need to alter it in a few ways: The items need to be displayed next to each other, rather than one-per-line The context information should be hidden from display (but not from screen readers) The tag should link to the page of items with that tag Displaying the items next to each other simply means setting the display of the list elements to inline. The context can be hidden by wrapping it in a <span> and then using the off-left method to hide it. And the link just means adding an anchor (with rel="tag" for some extra microformats bonus points). So, now we have a simple collection of links in our second example. The last stage is to add the sizes. Since we already have context in our content, the size is purely for visual rendering, so we can just use classes to define the different sizes. For my example, I’ll use a range of class names from not-popular through ultra-popular, in order of smallest to largest, and then use CSS to define different font sizes. If you prefer, you could always use less verbose class names such as size1 through size6. Anyway, adding some classes and CSS gives us our final example, a semantic and more accessible tag cloud. 2006 Mark Norman Francis marknormanfrancis 2006-12-09T00:00:00+00:00 https://24ways.org/2006/marking-up-a-tag-cloud/ code
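Since the finished examples above are linked rather than shown, here is a minimal sketch of the final tag cloud the article describes – the not-popular and ultra-popular class names come from the article, while the tags, counts, URLs and the quite-popular middle step are made up for illustration:

<ol class="tagcloud">
  <li class="ultra-popular"><a href="http://www.flickr.com/photos/tags/sky/" rel="tag">sky</a> <span>(137 photos)</span></li>
  <li class="not-popular"><a href="http://www.flickr.com/photos/tags/snow/" rel="tag">snow</a> <span>(3 photos)</span></li>
  <li class="quite-popular"><a href="http://www.flickr.com/photos/tags/sunset/" rel="tag">sunset</a> <span>(41 photos)</span></li>
</ol>

.tagcloud li { display: inline; } /* sit the tags next to each other */
.tagcloud li span {
  /* the off-left method: removed visually, still announced by screen readers */
  position: absolute;
  left: -9999px;
}
.not-popular { font-size: 0.8em; }
.quite-popular { font-size: 1.3em; }
.ultra-popular { font-size: 2em; }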
5 Managing a Mind On 21 May 2013, I woke in a hospital bed feeling exhausted, disorientated and ashamed. The day before, I had tried to kill myself. It’s very hard to write about this and share it. It feels like I’m opening up the deepest recesses of my soul and laying everything bare, but I think it’s important we share this as a community. Since starting tentatively to write about my experience, I’ve had many conversations about this: sharing with others; others sharing with me. I’ve been surprised to discover how many people are suffering similarly, thinking that they’re alone. They’re not. Due to an insane schedule of teaching, writing, speaking, designing and just generally trying to keep up, I reached a point where my buffers completely overflowed. I was working so hard on so many things that I was struggling to maintain control. I was living life on fast-forward and my grasp on everything was slowly slipping. On that day, I reached a low point – the lowest point of my life – and in that moment I could see only one way out. I surrendered. I can’t really describe that moment. I’m still grappling with it. All I know is that I couldn’t take it any more and I gave up. I very nearly died. I’m very fortunate to have survived. I was admitted to hospital, taken there unconscious in an ambulance. On waking, I felt overwhelmed with shame and overcome with remorse, but I was resolved to grasp the situation and address it. The experience has forced me to confront a great deal of issues in my life; it has also encouraged me to seek a deeper understanding of my situation and, in particular, the mechanics of the mind. The relentless pace of change We work in a fast-paced industry: few others, if any, confront the daily challenges we face. The landscape we work within is characterised by constant flux. It’s changing and evolving at a rate we have never experienced before. Few industries reinvent themselves yearly, monthly, weekly… Ours is one of these industries. Technology accelerates at an alarming rate and keeping abreast of this change is challenging, to say the least. As designers it can be difficult to maintain a knowledge bank that is relevant and fit for purpose. We’re on a constant rollercoaster of endless learning, trying to maintain the pace as, daily, new ideas and innovations emerge — in some cases fundamentally changing our medium. Under the pressure of client work or product design and development, it can be difficult to find the time to focus on learning the new skills we need to remain relevant and functionally competent. The result, all too often, is that the edges of our days have eroded. We no longer work nine to five; instead we work eight to six, and after the working day is over we regroup to spend our evenings learning. It’s an unsustainable situation. From the workshop to the web Added to this pressure to keep up, our work is now undertaken under a global gaze, conducted under an ever-present spotlight. Tools like Dribbble, Twitter and others, while incredibly powerful, have an unfortunate side effect, that of unfolding your ideas in public. This shift, from workshop to web, brings with it additional pressure. In the past, the early stages of creativity took place within the relative safety of the workshop, an environment where one could take risks and gather feedback from a trusted few. We had space to make and space to break. No more. Our industry’s focus (and society’s focus) on sharing, leads us now to play out our decisions in public. 
This shift has changed us culturally, slowly but surely easing every aspect of our process – and lives – from private to public. This is at once liberating and debilitating. If you’re not careful, an addiction to followers, likes, retweets, page views and other forms of measurement can overwhelm you. When you release your work into the wild and all it’s greeted with is silence, it can cripple you. Reflecting on this, in an insightful article titled Derailed, Rogie King asks, “Can social popularity take us off the course of growth and where we were intended to go?” He makes a powerful point: that perhaps we might focus on what really matters, setting aside statistics. He concludes that to grow as practitioners we might be best served by seeking out critique through other avenues, away from the social spotlight. On status anxiety and impostor syndrome Following my experience I embarked on a period of self-reflection. I wanted to discover what had driven me to take the course of action I had. I wanted to ensure it never happened again. I wanted to understand how the mind works and, in so doing, learn a little more about myself. I’ve only begun this journey, but two things I discovered resonated with me: the twin pressures of status anxiety and impostor syndrome. In his excellent book Status Anxiety, the philosopher Alain de Botton explores a growing concern with status anxiety, a worry about how others perceive us and how this shapes our relationship with the world. He states: We all worry about what others think of us. We all long to succeed and fear failure. We all suffer – to a greater or lesser degree, usually privately and with embarrassment – from status anxiety. […] This is an almost universal anxiety that rarely gets mentioned directly: an anxiety about what others think of us; about whether we’re judged a success or a failure, a winner or a loser. We see these pressures played out and amplified in the social sphere we all inhabit. We are social animals and we cannot help but react to the landscape we live and work within. Even if your work receives the public praise you so secretly desire, you find yourself questioning this praise. A psychological phenomenon in which sufferers are unable to internalise their accomplishments, impostor syndrome is far more widespread than you’d imagine. The author Leigh Buchanan describes it as “A fear that one is not as smart or capable as others think.” As she puts it, “People who feel like frauds chalk up their accomplishments to external factors such as luck and timing, or worry they are coasting on charm and personality rather than on talent.” At the bottom, this was all I could see. I felt overwhelmed by others’ perception of me. Was I a success or a failure? Would I be discovered as the fraud I’d convinced myself that I was? These twin pressures – that I was unconscious of at the time – had led me to a place of crippling self-doubt, questioning my very existence. The act of discovery, of investigating how the mind functions, led me to a deeper understanding of myself. Developing an awareness of psychology and learning about conditions like status anxiety and impostor syndrome helped me to understand and recognise how my mind worked, enabling me to manage it more effectively. Make it count Reflecting upon my experience, I began to regroup, to focus on what really mattered. I’d taken on too much — as I believe many of us do. I was guilty of wanting to do all the things. I started to introduce pauses.
Before blindly saying yes to everything, I forced myself to pause and ask: “Is this important?” Our community offers us huge benefits, but an always-on culture in which we’re bombarded daily by opportunity places temptation in our paths. It’s easy to get sucked into a vortex of wanting to be a part of everything. It’s important, however, to focus. As Simon Collison puts it: I cull and surrender topics. Then I focus on my strengths, mastering my core skills. We only have so much time and we can only do so much. It’s impossible, indeed futile, to try to do everything. Sometimes we need to step back a little and just enjoy life, enjoy others’ achievements, without feeling the need to be actively involved ourselves. As Mahatma Gandhi put it: A ‘no’ uttered from deepest conviction is better and greater than a ‘yes’ merely uttered to please, or what is worse, to avoid trouble. Young India, volume 9, 1927 We need to learn to say no a little more often. We need to focus on the work that matters. This, coupled with an understanding of the mind and how it works, can help us achieve a happier balance between work and life. Don’t waste your time. You only have one life. Make it count. 2013 Christopher Murphy christophermurphy 2013-12-21T00:00:00+00:00 https://24ways.org/2013/managing-a-mind/ process
247 Managing Flow and Rhythm with CSS Custom Properties An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better, but it can also improve scannability. Browsers ship with default CSS and these styles often create consistent rhythm for flow elements out of the box. The problem, though, is that we often wipe these styles out with a CSS reset. Elements such as <div> and <section> also have no default margin or padding associated with them. I’ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS and levelling the playing field for component-driven front-ends, with very little success. This experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in! The Flow utility With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy. The code .flow { --flow-space: 1em; } .flow > * + * { margin-top: var(--flow-space); } What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them. The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty of setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need it. Pretty cool, right? Let’s look at some examples. A basic example See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/LXqerj What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. Because our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a <h2> in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we’d affect everything because margin values would be calculated as 1.1X the font size. This example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though?
I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances: h2 { --flow-space: 3rem; } Notice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured. A more advanced example Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space. See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/ZmPGyL In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself: <article class="media__content flow"> ... </article> Something to remember though: the custom properties cascade in the same way that other CSS values do, so you’ve got to keep that in mind. We’ve got a great example of that here: because the flow utility sits on our .features component, which has a --flow-space override, the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element (see the sketch below). “But what about old browsers?”, I hear you cry We’re using CSS Custom Properties which, at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size: .flow { --flow-space: 1em; } .flow > * + * { margin-top: 1em; margin-top: var(--flow-space); } Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement! Wrapping up This tiny little utility can bring great power when you want to consistently space elements vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS. If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on! 2018 Andy Bell andybell 2018-12-07T00:00:00+00:00 https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/ code
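As a sketch of that .features gotcha (the values here are illustrative – the real ones live in the Pen linked above), the override cascades down to the component’s children, so the inner list needs its own value:

.features {
  --flow-space: 3rem; /* a larger gap around the features block */
}

.features__list {
  --flow-space: 1em; /* reset for the list’s children, because the custom property cascades */
}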
136 Making XML Beautiful Again: Introducing Client-Side XSL Remember that first time you saw XML and got it? When you really understood what was possible and the deep meaning each element could carry? Now when you see XML, it looks ugly, especially when you navigate to a page of XML in a browser. Well, with every modern browser now supporting XSL 1.0, I’m going to show you how you can turn something as simple as an ATOM feed into a customised page using a browser, Notepad and some XSL. What on earth is this XSL? XSL is a family of recommendations for defining XML document transformation and presentation. It consists of three parts: XSLT 1.0 – Extensible Stylesheet Language Transformation, a language for transforming XML XPath 1.0 – XML Path Language, an expression language used by XSLT to access or refer to parts of an XML document. (XPath is also used by the XML Linking specification) XSL-FO 1.0 – Extensible Stylesheet Language Formatting Objects, an XML vocabulary for specifying formatting semantics XSL transformations are usually a one-to-one transformation, but with newer versions (XSL 1.1 and XSL 2.0) it’s possible to create many-to-many transformations too. So now you have an overview of XSL, on with the show… So what do I need? To get going you need a browser that supports client-side XSL transformations, such as Firefox, Safari, Opera or Internet Explorer. Second, you need a source XML file – for this we’re going to use an ATOM feed from Flickr.com. And lastly, you need an editor of some kind. I find Notepad++ quick for short XSLs, while I tend to use XMLSpy or Oxygen for complex XSL work. Because we’re doing a client-side transformation, we need to modify the XML file to tell it where to find our yet-to-be-written XSL file. Take a look at the source XML file, which originates from my Flickr photos tagged sky, in ATOM format. The top of the ATOM file now has an additional <?xml-stylesheet /> instruction, as can be seen on line 2 below. This instructs the browser to use the XSL file to transform the document. <?xml version="1.0" encoding="utf-8" standalone="yes"?> <?xml-stylesheet type="text/xsl" href="flickr_transform.xsl"?> <feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/"> Your first transformation Your first XSL will look something like this: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/"> <xsl:output method="html" encoding="utf-8"/> </xsl:stylesheet> This is pretty much the starting point for most XSL files. You will notice the standard XML processing instruction at the top of the file (line 1). We then switch into XSL mode using the XSL namespace on all XSL elements (line 2). In this case, we have added namespaces for ATOM (line 4) and Dublin Core (line 5). This means the XSL can now read and understand those elements from the source XML. After we define all the namespaces, we then move onto the xsl:output element (line 6). This enables you to define the final method of output. Here we’re specifying html, but you could equally use XML or Text, for example. The encoding attributes on each element do what they say on the tin. As with all XML, of course, we close every element including the root.
The next stage is to add a template, in this case an <xsl:template /> as can be seen below: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/"> <xsl:output method="html" encoding="utf-8"/> <xsl:template match="/"> <html> <head> <title>Making XML beautiful again : Transforming ATOM</title> </head> <body> <xsl:apply-templates select="/atom:feed"/> </body> </html> </xsl:template> </xsl:stylesheet> The beautiful thing about XSL is its English-like syntax: if you say it out loud, it tends to make sense. The / value for the match attribute on line 8 is our first example of XPath syntax. The expression / matches the root of the document – the very top of the document tree – so this <xsl:template/> is the one matched and processed first. Once we get past our standard start of a HTML document, the only instruction remaining in this <xsl:template/> is to look for and match all <atom:feed/> elements using the <xsl:apply-templates/> in line 14, above. <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/"> <xsl:output method="html" encoding="utf-8"/> <xsl:template match="/"> <xsl:apply-templates select="/atom:feed"/> </xsl:template> <xsl:template match="/atom:feed"> <div id="content"> <h1> <xsl:value-of select="atom:title"/> </h1> <p> <xsl:value-of select="atom:subtitle"/> </p> <ul id="entries"> <xsl:apply-templates select="atom:entry"/> </ul> </div> </xsl:template> </xsl:stylesheet> This new template (line 12, above) matches <feed/> and starts to write the new HTML elements out to the output stream. The <xsl:value-of/> does exactly what you’d expect – it finds the value of the item specified in its select attribute. With XPath you can select any element or attribute from the source XML. The last part is a repeat of the now familiar <xsl:apply-templates/> from before, but this time we’re using it inside a called template. Yep, XSL is full of recursion… <xsl:template match="atom:entry"> <li class="entry"> <h2> <a href="{atom:link/@href}"> <xsl:value-of select="atom:title"/> </a> </h2> <p class="date"> (<xsl:value-of select="substring-before(atom:updated,'T')"/>) </p> <p class="content"> <xsl:value-of select="atom:content" disable-output-escaping="yes"/> </p> <xsl:apply-templates select="atom:category"/> </li> </xsl:template> The <xsl:template/> which matches atom:entry (line 1) occurs every time there is an <entry/> element in the source XML file. So in total that is 20 times – this is naturally why XSLT is full of recursion. This <xsl:template/> has been matched and therefore called higher up in the document, so we can start writing list elements directly to the output stream. The first part is simply a <h2/> with a link wrapped within it (lines 3-7). We can select attributes in XPath using @. The second part of this template selects the date, but performs an XPath string function on it. This means that we only get the date and not the time from the string (line 9). This is achieved by getting only the part of the string that exists before the T. Regular Expressions are not part of the XPath 1.0 string functions, although XPath 2.0 does include them. Because of this, in XSL we tend to rely heavily on the string functions that are available.
The third part of the template (line 12) is a <xsl:value-of/> again, but this time we use an attribute of <xsl:value-of/> called disable-output-escaping to turn escaped characters back into XML. The very last section is another <xsl:apply-templates/> call, taking us three templates deep. Do not worry, it is not uncommon to write XSL which goes 20 or more templates deep! <xsl:template match="atom:category"> <xsl:for-each select="."> <xsl:element name="a"> <xsl:attribute name="rel"> <xsl:text>tag</xsl:text> </xsl:attribute> <xsl:attribute name="href"> <xsl:value-of select="concat(@scheme, @term)"/> </xsl:attribute> <xsl:value-of select="@term"/> </xsl:element> <xsl:text> </xsl:text> </xsl:for-each> </xsl:template> In our final <xsl:template/>, we see a combination of what we have done before with a couple of twists. Once we match atom:category we then loop over each matched element (line 2). The XPath . means ‘self’, so the loop runs once for each category element within the <entry/> element. Following that, we start to output a link with a rel attribute of the predefined text, tag (lines 4-6). In XSL you can just type text, but results can end up with strange whitespace if you do (although there are ways to simply remove all whitespace). The only new XPath function in this example is concat(), which simply joins together whatever XPath values or text appear within its brackets. We end the output for this tag with an actual tag name (line 10) and we add a space afterwards (line 12) so it won’t touch the next tag. (There are better ways to do this in XSL using the last() XPath function – see the sketch below.) After that, we go back to the <xsl:for-each/> element again if there is another category element, otherwise we end the <xsl:for-each/> loop and end this <xsl:template/>. A touch of style Because we’re using recursion through our templates, you will find this is the end of the templates and the rest of the XML will be ignored by the parser. Finally, we can add our CSS to finish up. (I have created one for Flickr and another for News feeds) <style type="text/css" media="screen">@import "flickr_overview.css?v=001";</style> So we end up with a nice, simple-to-understand but also quick-to-write XSL file which can be used on ATOM Flickr feeds and ATOM news feeds. With a little playing around with XSL, you can make XML beautiful again. All the files can be found in the zip file (14k) 2006 Ian Forrester ianforrester 2006-12-07T00:00:00+00:00 https://24ways.org/2006/beautiful-xml-with-xsl/ code
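As a sketch of that last() refinement: because the template is invoked once per category by <xsl:apply-templates/>, position() and last() inside it refer to the current category’s place in that list, so the trailing space can be made conditional. (Note it needs to sit in the template itself, not within the single-node <xsl:for-each select=".">, where position() would always be 1.)

<xsl:if test="position() != last()">
  <xsl:text> </xsl:text>
</xsl:if>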
30 Making Sites More Responsive, Responsibly With digital projects we’re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations. A product’s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities. The popularisation of responsive web design is a great example of how we are able to shape the web’s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn’t know or care about the difference in these approaches, but we did. Responsive design was similar in this respect – momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.  We tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text. The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens. When we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. We can make our experiences more responsive to people’s needs, enhancing their usability and accessibility along the way. Being responsibly responsive Of course, when we think about being more responsive, there’s a fine line between creating useful functionality and becoming intrusive and overly complex. In the excellent Responsible Responsive Design, Scott Jehl states that: A responsible responsive design equally considers the following throughout a project: Usability: The way a website’s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions. Access: The ability for users of all devices, browsers, and assistive technologies to access and understand a site’s features and content. Sustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future. Performance: The speed at which a site’s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface. Scott’s book covers these ideas in a lot more detail than I’ll be able to here (put it on your Christmas list if it’s not there already), but for now let’s think a bit more about our roles as digital creators and the power this gives us. Our choices around technology and the decisions we have to make can be extremely wide-ranging. Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies. The power of the web We all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. 
We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions. You’ll have seen articles with flashy titles such as “Top 5 JavaScript APIs You’ve Never Heard Of!”, which you’ll probably read, think “That’s quite cool”, yet never use in any real work. There is great potential for technologies like these to be misused, but there are also great prospects for them to be used well to enhance experiences. Let’s have a look at a few examples you may not have considered. Offline first When we make websites, many of us follow a process which involves user stories – standardised snippets of context explaining who needs what, and why. “As a student I want to pay online for my course so I don’t have to visit the college in person.” “As a retailer I want to generate unique product codes so I can manage my stock.” We very often focus heavily on what needs doing, but may not consider carefully how it will be done. As in Scott’s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense – including under different conditions. Offline first is yet another ‘first’ methodology (my personal favourite being ‘tea first’), which encourages us to develop so that connectivity itself is an enhancement – letting users continue with tasks even when they’re offline. Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK’s signal-barren East Anglian wilderness as I do), then you’ll realise that connectivity isn’t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I’m sure we’re all familiar with – the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, then everyone’s likely to be offline after all. Wouldn’t it be better if we could do something like this instead? Someone visits our conference website. On this initial run, some assets may be cached for future use: the conference schedule, the site’s CSS, photos of the speakers. When the attendee revisits the site on the day, the page shell loads up from the cache. If we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use. If we don’t have something cached already, then we can try grabbing it online. If for any reason our requests for new content fail (we’re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is cached. There are a number of ways we can make something like this, including using the application cache (AppCache) if you’re that way inclined. However, you may want to look into service workers instead. There are also some great resources on Offline First! if you’d like to find out more about this. 
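To make the conference-site scenario above concrete, here is a minimal service worker sketch. It is an illustration rather than production code: the cache name, asset list and offline.html fallback page are all hypothetical, and the worker would be registered from the page with navigator.serviceWorker.register('/sw.js') where supported.

// sw.js – a rough sketch of the cache-first conference site described above.
var CACHE = 'conf-site-v1';
var ASSETS = [
  '/',
  '/schedule.html',
  '/css/site.css',
  '/img/speakers.jpg',
  '/offline.html'
];

self.addEventListener('install', function (event) {
  // On the first visit, pre-cache the shell and key content.
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      return cache.addAll(ASSETS);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Serve from the cache first, fall back to the network,
  // and finally to the pre-cached error page when we're offline.
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request).catch(function () {
        return caches.match('/offline.html');
      });
    })
  );
});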
Building in offline functionality isn’t necessarily about starting offline first, and it’s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we’d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. Thinking in this way can enhance each point in Scott’s criteria. As I mentioned, this isn’t necessarily the kind of development choice that our clients will ask us for, but it’s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation. Even more accessible We’ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? Can we make our sites even more usable and accessible through our development choices? Again using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen. Input Ambient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. It’s not new – many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels. If our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in units using ambient light events, in JavaScript. We may then change our presentation based on different bandings, perhaps like this: window.addEventListener('devicelight', function(e) { var lux = e.value; if (lux < 50) { //Change things for dim light } if (lux >= 50 && lux <= 10000) { //Change things for normal light } if (lux > 10000) { //Change things for bright light } }); Live demo (requires light sensor and supported browser). Soon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it’ll probably look something like this: @media (light-level: dim) { /*Change things for dim light*/ } @media (light-level: normal) { /*Change things for normal light*/ } @media (light-level: washed) { /*Change things for bright light*/ } While we may be quick to dismiss this kind of detection as being a gimmick, it’s important to consider that apps such as Light Detector, listed on Apple’s accessibility page, provide important context around exactly this functionality. “If you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is. 
You can find out whether the shades are drawn by moving the device up and down.” everywaretechnologies.com/apps/lightdetector Input can be about so much more than what we enter through keyboards. Both an ever-increasing number of available sensors and more APIs being supported by the major browsers will allow us to cater for more scenarios and respond to them accordingly. This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the web speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft. Output Web technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users. When we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we’re seeing and hearing. Haptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here’s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed by pauses in milliseconds. navigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]); Live demo (requires supported browser) With great power… What you’ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. This idea isn’t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kinds of approaches within our projects – if we don’t root for them, they probably won’t happen. This is where our professional responsibility comes in. We won’t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we’ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let’s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations. When you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible. 2014 Sally Jenkinson sallyjenkinson 2014-12-10T00:00:00+00:00 https://24ways.org/2014/making-sites-more-responsive-responsibly/ code
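As a small addendum to the vibration example, here is a sketch of the same call wrapped in a feature test – the shorter pattern is illustrative – so that browsers without the API simply skip the enhancement:

if ('vibrate' in navigator) {
  // Only attempt haptic feedback where the API actually exists.
  navigator.vibrate([100, 300, 100]);
}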
97 Making Modular Layout Systems For all of the advantages the web has with distribution of content, I’ve always lamented the handiness of the WYSIWYG design tools from the print publishing world. When I set out to redesign my personal website, I wanted to have some of the same abilities that those tools have, laying out pages how I saw fit, and that meant a flexible system for dealing with imagery. Building on some of the CSS that Eric Meyer employed a few years back on the A List Apart design, I created a set of classes to use together to achieve the variety I was after. Employing multiple classes isn’t a new technique, but most examples aren’t coming at this from strictly editorial and visual perspectives; I wanted to have options to vary my layouts depending on content. If you want to skip ahead, you can view the example first. Laying the Foundation We need to be able to map out our page so that we have a predictable canvas, and then create a system of image sizes that work with it. For the sake of this article, let’s use a simple uniform 7-column grid, consisting of seven 100px-wide columns and 10px of space between each column, though you can use any measurements you want as long as they remain constant. All of our images will have a width that references the grid column widths (in our example, 100px, 210px, 320px, 430px, 540px, 650px, or 760px), but the height can be as large as needed. Once we know our images will all have one of those widths, we can set up our CSS to deal with the variations in layout. In the most basic form, we’re going to be dealing with three classes: one each representing an identifier, a size, and a placement for our elements. This is really a process of abstracting the important qualities of what you would do with a given image in a layout into separate classes, allowing you to quickly customize their appearance by combining the appropriate classes. Rather than trying to serve up a one-size-fits-all approach to styling, we give each class only one or two attributes and rely on the combination of classes to get us there. Identifier This specifies what kind of element we have: usually either an image (pic) or some piece of text (caption). Size Since we know how our grid is constructed and the potential widths of our images, we can knock out a space equal to the width of any number of columns. In our example, that value can be one, two, three, four, five, six, or seven. Placement This tells the element where to go. In our example we can use a class of left or right, which sets the appropriate floating rule. Additions I created a few additions that can be tacked on after the “placement” in the class stack: solo, for a bit more space beneath images without captions, frame for images that need a border, and inset for an element that appears inside a block of text. Outset images are my default, but you could easily switch the default concept to use inset images and create a class of outset to pull them out of the content columns.
The CSS /* I D E N T I F I E R */ .pic p, .caption { font-size: 11px; line-height: 16px; font-family: Verdana, Arial, sans-serif; color: #666; margin: 4px 0 10px; } /* P L A C E M E N T */ .left {float: left; margin-right: 20px;} .right {float: right; margin-left: 20px;} .right.inset {margin: 0 120px 0 20px;} /* img floated right within text */ .left.inset {margin-left: 230px;} /* img floated left within text */ /* S I Z E */ .one {width: 100px;} .two {width: 210px;} .three {width: 320px;} .four {width: 430px;} .five {width: 540px;} .six {width: 650px;} .seven {width: 760px;} .eight {width: 870px;} /* A D D I T I O N S */ .frame {border: 1px solid #999;} .solo img {margin-bottom: 20px;} In Use You can already see how powerful this approach can be. If you want an image and a caption on the left to stretch over half of the page, you would use: <div class="pic four left"> <img src="image.jpg" /> <p>Caption goes here.</p> </div> Or, for that same image with a border and no caption: <img src="image.jpg" class="pic four left frame solo"/> You just tack on the classes that contain the qualities you need. And because we’ve kept each class so simple, we can apply these same stylings to other elements too: <p class="caption two left">Caption goes here.</p> Caveats Obviously there are some potential semantic hang-ups with these methods. While classes like pic and caption stem the tide a bit, others like left and right are tougher to justify. This is something that you have to decide for yourself; I’m fine with the occasional four or left class because I think there’s a good tradeoff. Just as a fully semantic solution to this problem would likely be imperfect, this solution is imperfect from the other side of the semantic fence. Additionally, IE6 doesn’t understand the chain of classes within a CSS selector (like .right.inset). If you need to support IE6, you may have to write a few more CSS rules to accommodate any discrepancies. Opportunities This is clearly a simple example, but starting with a modular foundation like this leaves the door open for opportunity. We’ve created a highly flexible and human-readable system for layout manipulation. Obviously, this is something that would need to be tailored to the spacing and sizes of your site, but the systematic approach is very powerful, especially for editorial websites whose articles might have lots of images of varying sizes. It may not get us fully to the flexibility of WYSIWYG print layouts, but methods like this point us in a direction of designs that can adapt to the needs of the content. View the example: without grid and with grid. 2008 Jason Santa Maria jasonsantamaria 2008-12-15T00:00:00+00:00 https://24ways.org/2008/making-modular-layout-systems/ process
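For clarity, the size classes above map directly onto the grid: each spans a number of columns plus the 10px gutters between them. A quick sketch of the arithmetic behind those widths:

/* width = (n * 100px) + ((n - 1) * 10px)
   .three = (3 * 100px) + (2 * 10px) = 320px
   .seven = (7 * 100px) + (6 * 10px) = 760px */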
50 Make a Comic For something slightly different over Christmas, why not step away from your computer and make a comic? Definitely not the author working on a comic in the studio, with the desk displaying some of the things you need to make a comic on paper. Why make a comic? First of all, it’s truly fun and it’s not that difficult. If you’re a designer, you can use skills you already have, so why not take some time to indulge your aesthetic whims and make something for yourself, rather than for a client or your company. And you can use a computer – or not. If you’re an interaction designer, it’s likely you’ve already made a storyboard or flow, or designed some characters for personas. This is a wee jump away from that, to the realm of storytelling and navigating human emotions through characters who may or may not be human. Similar medium and skills, different content. It’s not a client deliverable but something that stands by itself, and you’ve nobody’s criteria to meet except those that exist in your imagination! Thanks to your brain and the alchemy of comics, you can put nearly anything in a sequence and your brain will find a way to make sense of it. Scott McCloud wrote about the non sequitur in comics: “There is a kind of alchemy at work in the space between panels which can help us find meaning or resonance in even the most jarring of combinations.” Here’s an example of a non sequitur from Scott McCloud’s Understanding Comics – the images bear no relation to one another, but since they’re in a sequence our brains do their best to understand it: Once you know this it takes the pressure off somewhat. It’s a fun thing to keep in mind and experiment with in your comics! Materials needed A4 copy/printing paper HB pencil for light drawing Dip pen and waterproof Indian ink Bristol board (or any good quality card with a smooth, durable surface) Step 1: Get ideas You’d be surprised where you can take a small grain of an idea and develop it into an interesting comic. Think about a funny conversation you had, or any irrational fears, habits, dreams or anything else. Just start writing and drawing. Having ideas is hard, I know, but you will get some ideas when you start working. One way to keep track of ideas is to keep a sketch diary, capturing funny conversations and other events you could use in comics later. You might want to just sketch out the whole comic very roughly if that helps. I tend to sketch the story first, but it usually changes drastically during step 2. Step 2: Edit your story using thumbnails How thumbnailing works. Why use thumbnails? You can move them around or get rid of them! Drawings are harder and much slower to edit than words, so you need to draw something very quick and very rough. You don’t have to care about drawing quality at this point. You might already have a drafted comic from the previous step; now you can split each panel up into a thumbnail like the image above. Get an A4 sheet of printing paper and tear it up into squares. A thumbnail equals a comic panel. Start drawing one panel per thumbnail. This way you can move scenes and parts of the story around as you work on the pacing. It’s an extremely useful tip if you want to expand a moment in time or draw out a dialogue, or if you want to just completely cut scenes. Step 3: Plan a layout So you’ve got the story more or less down: you now need to know how they’ll look on the page. Sketch a layout and arrange the thumbnails into the layout. 
The simplest way to do this is to divide an A4 page into equal panels — say, nine. But if you want, you can be more creative than that. The Gigantic Beard That Was Evil by Stephen Collins is an excellent example of the scope for using page layout creatively. You can really push the form: play with layout, scale, story and what you think of as a comic. Step 4: Draw the comic I recommend drawing on A4 Bristol board paper since it has a smooth surface, can tolerate a lot of rubbing out and holds ink well. You can get it from any art shop. Using your thumbnails for reference, draw the comic lightly using an HB pencil. Don’t make the line so heavy that it can’t be erased (since you’ll ink over the lines later). Step 5: Ink the comic Image before colour was added. You’ve drawn your story. Well done! Now for the fun part. I recommend using a dip pen and some waterproof ink. Why waterproof? If you want, you can add an ink wash later, or even paint it. If you don’t have a dip pen, you could also use any quality pen. Carefully go over your pencilled lines with the pen, working from top left to right and down, to avoid smudging it. It’s unfortunately easy to smudge the ink from the dip pen, so I recommend practising first. You’ve made a comic! Step 6: Adding colour Comics traditionally had a limited colour palette before computers (here’s an in-depth explanation if you’re curious). You can actually do a huge amount with a restricted colour palette. Ellice Weaver’s comics show very nicely how you can paint your work using a restricted palette. So for the next step, resist the temptation to add ALL THE COLOURS and consider using a limited palette. Once the ink is completely dry, erase the pencilled lines and you’ll be left with a beautiful inked black and white drawing. You could use a computer for this part. You could also photocopy it and paint straight on the copy. If you’re feeling really brave, you could paint straight on the original. But I’d suggest not doing this if it’s your first try at painting! What follows is an extremely basic guide for painting using Photoshop, but there are hundreds of brilliant articles out there and different techniques for digital painting. How to paint your comic using Photoshop Scan the drawing and open it in Photoshop. You can adjust the levels (Image → Adjustments → Levels) to make the lines darker and crisper, and the paper invisible. At this stage, you can erase any smudges or mistakes. With a Wacom tablet, you could even completely redraw parts! Computers are just amazing. Keep the line art as its own layer. Add a new layer on top of the lines, and set the layer state from normal to multiply. This means you can paint your comic without obscuring your lines. Rename the layer something else, so you can keep track. Start blocking in colour. And once you’re happy with that, experiment with adding tone and texture. Christmas comic challenge! Why not challenge yourself to make a short comic over Christmas? If you make one, share it in the comments. Or show me on Twitter — I’d love to see it. Credit: Many of these techniques were learned on the Royal Drawing School’s brilliant ‘Drawing the Graphic Novel’ course. 2015 Rebecca Cottrell rebeccacottrell 2015-12-20T00:00:00+00:00 https://24ways.org/2015/make-a-comic/ design
185 Make Your Mockup in Markup We aren’t designing copies of web pages, we’re designing web pages. Andy Clarke, via Quotes on Design The old way I used to think the best place to design a website was in an image editor. I’d create a pixel-perfect PSD filled with generic content, send it off to the client, go through several rounds of revisions, and eventually create the markup. Does this process sound familiar? You’re not alone. In a very scientific and official survey I conducted, close to 90% of respondents said they design in Photoshop before the browser. That process is whack, yo! Recently, thanks in large part to the influence of design hero Dan Cederholm, I’ve come to the conclusion that a website’s design should begin where it’s going to live: in the browser. Die Photoshop, die Some of you may be wondering, “what’s so bad about using Photoshop for the bulk of my design?” Well, any seasoned designer will tell you that working in Photoshop is akin to working in a minefield: you never know when it’s going to blow up in your face. The application Adobe Photoshop CS4 has unexpectedly ruined your day. Photoshop’s propensity to crash at crucial moments is a running joke in the industry, as is its barely usable interface. And don’t even get me started on the hot, steaming pile of crap that is text rendering. Text rendered in Photoshop (left) versus Safari (right). Crashing and text rendering issues suck, but we’ve learned to live with them. The real issue with using Photoshop for mockups is the expectations you’re setting for a client. When you send the client a static image of the design, you’re not giving them the whole picture — they can’t see how a fluid grid would function, how the design will look in a variety of browsers, basic interactions like :hover effects, or JavaScript behaviors. For more on the disadvantages to showing clients designs as images rather than websites, check out Andy Clarke’s Time to stop showing clients static design visuals. A necessary evil? In the past we’ve put up with Photoshop because it was vital to achieving our beloved rounded corners, drop shadows, outer glows, and gradients. However, with the recent adaptation of CSS3 in major browsers, and the slow, joyous death of IE6, browsers can render mockups that are just as beautiful as those created in an image editor. With the power of RGBA, text-shadow, box-shadow, border-radius, transparent PNGs, and @font-face combined, you can create a prototype that radiates shiny awesomeness right in the browser. If you can see this epic article through to the end, I’ll show you step by step how to create a gorgeous mockup using mostly markup. Get started by getting naked Content precedes design. Design in the absence of content is not design, it’s decoration. Jeffrey Zeldman In the beginning, don’t even think about style. Instead, start with the foundation: the content. Lay the groundwork for your markup order, and ensure that your design will be useable with styles and images turned off. This is great for prioritizing the content, and puts you on the right path for accessibility and search engine optimization. Not a bad place to start, amirite? An example of unstyled content, in all its naked glory. View it large. Flush out the layout The next step is to structure the content in a usable way. With CSS, making basic layout changes is as easy as switching up a float, so experiment to see what structure suits the content best. The mockup with basic layout work done. 
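As a sketch of that layout experimentation (the class names and widths here are hypothetical, not taken from the example mockup), trying a different structure really is just a matter of switching up a float:

/* Two-column trial – flip the floats to test the mirrored layout. */
.main    { float: left;  width: 60%; }
.sidebar { float: right; width: 35%; }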
Got your grids covered There are a variety of tools that allow you to layer a grid over your browser window. For Mac users I recommend using Slammer, and PC users can check out one of the bookmarklets that are available, such as 960 Gridder. The mockup with a grid applied using Slammer. Once you’ve found a layout that works well for the content, pass it along to the client for review. This keeps them involved in the design process, and gives them an idea of how the site will be structured when it’s live. Start your styling Now for the fun part: begin applying the presentation layer. Let usability considerations drive your decisions about color and typography; use highlighted colors and contrasting typefaces on elements you wish to emphasize. RGBA? More like RGByay! Introducing color is easy with RGBA. I like to start with the page’s main color, then use white at varying opacities to emphasize content sections. In the example mockup the body background is set to rgba(203,111,21,1), the content containers are set to rgba(255,255,255,0.7), and a few elements are highlighted with rgba(255,255,255,0.1). If you’re not sure how RGBA works, check out Drew McLellan’s super helpful 24ways article. Laying down type Just like with color, you can use typography to evoke a feeling and direct a user’s attention. Use contrasting typefaces (like serif headlines and sans-serif body text) to group the content into meaningful sections. In a recent A List Apart article, the Master of Web Typography™ Jason Santa Maria offers excellent advice on how to choose your typefaces: Write down a general description of the qualities of the message you are trying to convey, and then look for typefaces that embody those qualities. Sounds pretty straightforward. I wanted to give my design a classic feel with a hint of nostalgia, so I used Georgia for the headlines, and incorporated the ornate ampersand from Baskerville into the header. A closeup on the site’s header. Let’s get sexy The design doesn’t look too bad as it is, but it’s still pretty flat. We can do better, and after mixing in some CSS3 and a couple of PNGs, it’s going to get downright steamy in here. Give it some glow Objects in the natural world reflect light, so to make your design feel tangible and organic, give it some glow. In the example design I achieved this by creating two white-to-transparent gradients of varying opacities. Both have a solid white border across their top, which gives edges a double border effect and makes them look sharper. Using CSS3’s text-shadow on headlines and box-shadow on content modules is another quick way to add depth. A wide and closeup view of the design with gradients, text-shadow and box-shadow added. For information on how to implement text-shadow and box-shadow using RGBA, check out the article I wrote on it last week. 37 pieces of flair Okay, maybe you don’t need that much flair, but it couldn’t hurt to add a little; it’s the details that will set your design apart. Work in imagery and texture, using PNGs with an alpha channel so you can layer images and still tweak the color later on. The design with grungy textures, a noisy diagonal stripe pattern, and some old transportation images layered behind the text. Because the colors are rendered using RGBA, these images bleed through the content, giving the design a layered feel. Best viewed large. Send it off Hey, look at that. You’ve got a detailed, well structured mockup for the client to review. Best of all, your markup is complete too.
If the client approves the design at this stage, your template is practically finished. Bust out the party hats! Not so fast, Buster! So I don’t know about you, but I’ve never gotten a design past the client’s keen eye for criticism on the first go. Let’s review some hypothetical feedback (none of which is too outlandish, in my experience), and see how we’d make the requested changes in the browser. Updating the typography My ex-girlfriend loved Georgia, so I never want to see it again. Can we get rid of it? I want to use a font that’s chunky and loud, just like my stupid ex-girlfriend. Fakey McClient Yikes! Thankfully with CSS, removing Georgia is as easy as running a find and replace on the stylesheet. In my revised mockup, I used @font-face and League Gothic on the headlines to give the typography the, um, unique feel the client is looking for. The same mockup, using @font-face on the headlines. If you’re unfamiliar with implementing @font-face, check out Nice Web Type’s helpful article. Adding rounded corners I’m not sure if I’ll like it, but I want to see what it’d look like with rounded corners. My cousin, a Web 2.0 marketing guru, says they’re trendy right now. Fakey McClient Switching to rounded corners is a nightmare if you’re doing your mockup in Photoshop, since it means recreating most of the shapes and UI elements in the design. Thankfully, with CSS, border-radius comes to our rescue! By applying this gem of a style to a handful of classes, you’ll be rounded out in no time. The mockup with rounded corners, created using border-radius. If you’re not sure how to implement border-radius, check out CSS3.info’s quick how-to. Making changes to the color The design is too dark, it’s depressing! They call it ‘the blues’ for a reason, dummy. Can you try using a brighter color? I want orange, like Zeldman uses. Fakey McClient Making color changes is another groan-inducing task when working in Photoshop. Finding and updating every background layer, every drop shadow, and every link can take forever in a complex PSD. However, if you’ve done your mockup in markup with RGBA and semi-transparent PNGs, making changes to your color is as easy as updating the body background and a few font colors. The mockup with an orange color scheme. Best viewed large. Ahem, what about Internet Explorer? Gee, thanks for reminding me, buzzkill. Several of the CSS features I’ve suggested you use, such as RGBA, text-shadow and box-shadow, and border-radius, are not supported in Internet Explorer. I know, it makes me sad too. However, this doesn’t mean you can’t try these techniques out in your markup-based mockups. The point here is to get your mockups done as efficiently as possible, and to keep the emphasis on markup from the very beginning. Once the design is approved, you and the client have to decide if you can live with the design looking different in different browsers. Is it so bad if some users get to see drop shadows and some don’t? Or if the rounded corners are missing for a portion of your audience? The design won’t be broken for IE people; they’re just missing out on a few visual treats that other users will see. The idea of rewarding users who choose modern browsers is not a new concept; Dan covers it thoroughly in Handcrafted CSS, and it’s been written about in the past by Aaron Gustafson and Andy Clarke on several occasions.
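A simple way to reward modern browsers while still degrading nicely is to declare a solid color before its RGBA equivalent — browsers without RGBA support ignore the second declaration and keep the solid fallback. A hedged sketch:

.module {
  background: #fff; /* solid fallback for browsers without RGBA */
  background: rgba(255,255,255,0.7); /* semi-transparent white for everyone else */
}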
I believe we shouldn’t have to design for the lowest common denominator (cough, IE6 users, cough); instead we should create designs that are beautiful in modern browsers, but still degrade nicely for the other guy. However, some clients just aren’t that progressive, and in that case you can always use background images for drop shadows and rounded corners, as you have in the past. Closing thoughts With the advent of CSS3, browsers are just as capable of giving us beautiful, detailed mockups as Photoshop, and in half the time. I’m not the only one to make an argument for this revised process; in his article Time to stop showing clients static design visuals, and in his presentation Walls Come Tumbling Down, Andy Clarke makes a fantastic case for creating your mockups in markup. So I guess my challenge to you for 2010 is to get out of Photoshop and into the code. Even if the arguments for designing in the browser aren’t enough to make you change your process permanently, it’s worthwhile to give it a try. Look at the New Year as a time to experiment; applying constraints and evaluating old processes can do wonders for improving your efficiency and creativity. 2009 Meagan Fisher meaganfisher 2009-12-24T00:00:00+00:00 https://24ways.org/2009/make-your-mockup-in-markup/ process
20 Make Your Browser Dance It was a crisp winter’s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue. This was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these… and working out ways to mix them as the music played. This was it. This was me VJing. This was all a lifetime (well a decade!) ago. When I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframes directive, as well as new APIs such as getUserMedia, indexedDB, the Web Audio API. But wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser? Well, there’s only one thing to do: let’s try it! Let’s take to the dance floor Over the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let’s take the same approach here. The main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right? So, let’s start at the beginning: creating a simple visual. For this I’m going to create a CSS animation. It’s just a funky i element with the opacity being changed to make it flash. See the Pen Creating a light by Rumyra (@Rumyra) on CodePen A note about prefixes: I’ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. A great resource for finding out whether you do is caniuse.com. You can also check out all the code for the examples in this article. Start the music Well, that’s pretty easy so far. Next up: loading in some sound. For this we’ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device’s speakers; and any number of processing nodes in between. All this processing that goes on with the audio is sandboxed within the AudioContext. So, let’s start by initialising our audio context. var contextClass = window.AudioContext; if (contextClass) { //web audio api available. var audioContext = new contextClass(); } else { //web audio api unavailable //warn user to upgrade/change browser } Now let’s load our sound file into the new context we created with an XMLHttpRequest.
function loadSound() { //set audio file url var audioFileUrl = '/octave.ogg'; //create new request var request = new XMLHttpRequest(); request.open("GET", audioFileUrl, true); request.responseType = "arraybuffer"; request.onload = function() { //take from http request and decode into buffer audioContext.decodeAudioData(request.response, function(buffer) { audioBuffer = buffer; }); } request.send(); } Phew! Now we’ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase volume; add filters; spatialise sound. If you want to dig deeper, the O’Reilly Web Audio API book by Boris Smus is available to read online free. All we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have. Learning the steps Let’s take a minute to step back and remember our school days and science class. I’m sure if I drew a picture of a sound wave, we would all start nodding our heads. The sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically strength of pressure. A simple example of change of amplitude is when you increase the volume on your stereo and the output wave increases in size. This is great when everything is analogue, but the waveform varies continuously and it’s not suitable for digital processing: there’s an infinite set of values. For digital processing, we need discrete numbers. We have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we’re doing in the code above is piping that data into the audio context. All we need to do now is access it. We can do this with the Web Audio API’s analysing functionality. Just pop in an analysing node before we connect the source to its destination node. function createAnalyser(source) { //create analyser node analyser = audioContext.createAnalyser(); //connect to source source.connect(analyser); //pipe to speakers analyser.connect(audioContext.destination); } The data I’m really interested in here is frequency. Later we could look into amplitude or time, but for now I’m going to stick with frequency. The analyser node gives us frequency data via the getByteFrequencyData method. Don’t forget to count! To collect the data from the getByteFrequencyData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it? This is really up to us, and depends on how fine a resolution of frequencies we want to analyse. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context’s sampleRate attribute. This is good to bear in mind when you’re thinking about your resolution of frequencies. var sampleRate = audioContext.sampleRate; Let’s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we’re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0–20,000Hz. So, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart.
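One piece the snippets above leave out is actually starting playback and wiring the decoded buffer through the analyser. A minimal sketch, reusing the createAnalyser function from above (start is the unprefixed method name; some older WebKit builds used noteOn instead):

function playSound(buffer) {
  //create a source node from the decoded buffer
  var source = audioContext.createBufferSource();
  source.buffer = buffer;
  //route it through the analyser on its way to the speakers
  createAnalyser(source);
  //begin playback immediately
  source.start(0);
}

With playback wired up, back to the question of how big that frequency array should be.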
However, we are going to create an array which is half the size of the FFT (fast Fourier transform) — in this case 2,048, the default. You can set it via the fftSize property. //set our FFT size analyser.fftSize = 2048; //create an empty array with 1024 items var frequencyData = new Uint8Array(1024); So, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 ÷ 1,024 = 23.44Hz apart. The thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame. function update() { //constantly getting feedback from data requestAnimationFrame(update); analyser.getByteFrequencyData(frequencyData); } Putting it all together Sweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier. We can easily pause and run our CSS animation from JavaScript: element.style.webkitAnimationPlayState = "paused"; element.style.webkitAnimationPlayState = "running"; Unfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle. There is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery-pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms. This seems a bit much for our proof of concept, so let’s backtrack a little. We know by the animation we’ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript. element.style.opacity = "1"; element.style.opacity = "0.2"; So let’s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I’ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold. //get light elements var lights = document.getElementsByTagName('i'); var totalLights = lights.length; for (var i=0; i<totalLights; i++) { //get frequencyData key var freqDataKey = i*8; //if gain is over threshold for that frequency animate light if (frequencyData[freqDataKey] > 160){ //start animation on element lights[i].style.opacity = "1"; } else { lights[i].style.opacity = "0.2"; } } See all the code in action here. I suggest viewing in a modern browser :) Awesome! It is true — we can VJ in our browser! Let’s dance! So, let’s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. Also, maybe we should try a sound file more suited to gigs or clubs. Check it out! I don’t know about you, but I’m pretty excited — that’s just a bit of HTML, CSS and JavaScript! The other thing to think about, of course, is the sound that you would get at a venue. We don’t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I’ve found, is to capture what my laptop’s mic is picking up and piping that back into the audio context. We can do this by using getUserMedia. Let’s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash. And relax :) There you have it. Sit back, play some music and enjoy the Winamp-like experience in front of you.
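For the curious, the microphone capture boils down to just a few lines — a sketch using the callback form of getUserMedia that was current at the time of writing (prefixes omitted, as before):

navigator.getUserMedia({ audio: true }, function(stream) {
  //create a source node from the microphone stream
  var source = audioContext.createMediaStreamSource(stream);
  //connect it straight to the analyser — but not on to the speakers,
  //otherwise the mic would pick up its own output and feed back
  source.connect(analyser);
}, function(error) {
  //mic access was denied or is unavailable
});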
So, where do we go from here? I already have a wealth of ideas. We haven’t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it’s questionable whether the browser is the best environment for this. For one, I’m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it’s fun, and it looks cool and sometimes I think it’s OK to just dance. 2013 Ruth John ruthjohn 2013-12-02T00:00:00+00:00 https://24ways.org/2013/make-your-browser-dance/ code
178 Make Out Like a Bandit If you are anything like me, you are a professional juggler. No, we don’t juggle bowling pins or anything like that (or do you? Hey, that’s pretty rad!). I’m talking about the work that we juggle daily. In my case, I’m a full-time designer, a half-time graduate student, a sometimes author and conference speaker, and an all-the-time social networker. Only two of these “positions” have actually put any money in my pocket (and, well, the second one takes a lot of money out). Still, this is all part of the work that I do. Your work situation is probably similar. We are workaholics. So if we work so much in our daily lives, shouldn’t we be making out like bandits? Umm, honestly, I’m not hitting on you, silly. I’m talking about our success. We work and work and work. Shouldn’t we be filthy, stinking rich? Well… okay, that’s not quite what I mean either. I’m not necessarily talking about money (though that could potentially be a part of it). I’m talking about success — as in feeling a true sense of accomplishment and feeling happy about what we do and why we do it. It’s important to feel accomplished and generally happy in our work. To make out like a bandit (or have an incredible amount of success), you can either get lucky or work hard for it. And if you’re going to work hard for it, you might as well make it all meaningful and worthwhile. This is what I strive for in my own work and my life, and the following points I’m sharing with you are the steps I am taking to work toward this. I know the price of success: dedication, hard work & an unremitting devotion to the things you want to see happen. — Frank Lloyd Wright Learn. Participate. Do. The best way to get good at something is to keep doing whatever it is you’re doing that you want to be good at. For example, a sushi-enthusiast might take a sushi-making class because she wants to learn to make sushi for herself. It totally makes sense while the teacher demonstrates all the procedures, materials, and methods needed to make good, beautiful sushi. Later, when the student goes home and tries to make sushi on her own, she gets totally confused and lost. Okay, I’m not even going to hide it: I’m talking about myself (this happened to me). As much as I love sushi, I couldn’t even begin to make good sushi because I’ve never really practiced. Take advantage of learning opportunities where possible. Whether you’re learning CSS, Actionscript, or visual design, the best way to grasp how to do things is to participate, practice, do. Apply what you learn in your work. Participation is so vital to your success. If you have problems, let people know, and ask. But definitely practice on your own. And as cliché as it may sound, believe in yourself because if you don’t think you can do it, no one else will think you can either. Maintain momentum With whatever it is you’re doing, if you find yourself “on a roll”, you should take advantage of that momentum and keep moving. Sure, you’ll definitely want to take breaks here or there, but remember that momentum can be very difficult to obtain again once you’ve lost it. Get it done! Deal with people Whether you love or hate people, the fact is, you gotta deal with them — even the difficult ones. If you’re in a management position, then you know pretty well that most people don’t like being told what to do (even if that’s their job). Find ways to get people excited about what they’re doing.
Make people feel that they (and what they do) are needed — people respond better if they’re valued, not commanded. Even if you’re not in a management position, this still applies to the way you work with your coworkers, clients, vendors, etc. Resolve any conflicts right away. Conflicts will inevitably happen. Move on to how you can improve the situation, and do it as quickly as possible. Don’t spend too much time focusing on whose screw up it is — nobody feels good in this situation. Also, try to keep people informed on whatever it is you need or what it is you’re doing. If you’re waiting on something from someone, and it’s been a while, don’t be afraid to say something (tactfully). Sometimes people are forgetful — or just slacking. Hey, it happens! Help yourself by helping others What are some of the small, simple things you can do when you’re working that will help the people you work with (and in most cases, will end up helping yourself)? For example: if you’re a designer, perhaps taking a couple minutes now to organize and name your Photoshop layers will end up saving time later (since it will be easier to find things). This is going to help both you and your team. Or, developers: taking some time to write some documentation (even if it’s as simple as a comment in the code, or a well-written commit message) could potentially save valuable time for both you and your team later. Maybe you have to take a little time to sit down with a coworker and explain why something works the way it does. This helps them out tremendously — and will most likely lead to them respecting you a little more. This is a benefit. If you make little things like this a habit, people will notice. People will enjoy working with you. People will trust you and rely on you. Sure, it might seem beneficial at any given moment to be “in it for yourself” (and therefore only helping yourself), but that won’t last very long. Helping others (whether it be a small or large feat) will cause a positive impact in the long run — and that is what will be more valuable to you and your career. Do work that is meaningful One of the best ways to feel successful about what you do is to feel good and happy about it. And a great way to feel good and happy about what you’re doing is to actually do good. This could be purpose-driven work that focuses on sustainability and environmentalism, or work that helps support causes and charity. Perhaps the work simply inspires people. Or maybe the work is just something you are very passionate about. Whatever the work may be, try working on projects that are meaningful to you. You’ll do well simply by being more motivated and interested. And it’s a double-win if the project is meaningful to others as well. I feel very fortunate to work at a place like Crush + Lovely, where we have found quite frequently that the projects that inspire people, focus on global and social good, and create some sort of positive impact are the very projects that bring us more paid projects. But more importantly, we are happy and excited to do it. You might not work at a company that takes on those types of projects. But perhaps you have your own personal endeavors that create this excitement for you. Elliot Jay Stocks wrote about having pet projects. Do you take on side projects? What are those projects? Over the last couple years, I’ve seen some really fantastic side projects come out that are great examples of meaningful work. 
These projects reflect the passions and goals of the respective designers and developers involved, and therefore become quite successful (because the people involved simply love what they are doing while they’re doing it). Some of these projects include: Typedia is a shared encyclopedia of typefaces which serves as a resource to classify, categorize, and connect typefaces. It was founded by Jason Santa Maria, a graphic designer with a love and passion for typography. He created it as a solution to a problem he faced as a designer: finding the right typeface. Huffduffer was created by Jeremy Keith, a web developer who wanted to create a podcast of inspirational talks — but after he found that this could be tedious, he decided to create a tool to automate this. Level & Tap was created by passionate photographer and web developer, Tom Watson. It began as a photography print store for Tom’s best personal photography. Over time, more photographers were added to the site and the site has grown to become quite a great collection of beautiful photography. Heat Eat Review is a review blog created by information architect and user experience designer, Abi Jones. As a foodie, she is able to use this passion for this blog, as it focuses on reviewing TV Dinners, Frozen Meals, and Microwavable Foods. Art in My Coffee, a favorite personal project of my own, is a photo blog of coffee art I created, after I found that my friends and I were frequently posting coffee art photos to Flickr, Twitter, and other websites. After the blog became more popular, I teamed up with Meagan Fisher on the project, who has just as much a passion for coffee art, if not more. So, what’s important to you? This is the very, very important question here. What really matters to you most? Beyond just working on meaningful projects you are passionate about, is the work you’re doing the right work for you, so that you can live a good lifestyle? Scott Boms wrote an excellent article, Burnout, in which he shares his own experience in battling stress and exhaustion, and what he learned from it. You should definitely read the article in its entirety, but a couple of his points that are particularly excellent are: Make time for numero uno, in which you make time for the things in life that make you happy Examine your values, goals, and measures of success, in which you work toward the things you are passionate about, your own personal development, and focusing on the things that matter. A solid work-life balance can be a challenging struggle to obtain. Of course, you can cheat this by finding ways to combine the things you love with the things you do (so then it doesn’t even feel like you’re working — oh, you sneaky little bandit!). However, there are other factors to consider beyond your general love for the work you’re doing. Take proper care of yourself physically, mentally, and socially. So, are you making out like a bandit? Do you feel accomplished and generally happy with your work? If not, perhaps that is something to focus on for the next year. Consider your work (both in your job as well as any side projects you may take on) and how it benefits you — present and future. Take any steps necessary to get you to where you need to be. If you are miserable, fix it! Finally, it’s important to be thankful for the things that matter to you and make you happy. Pass it along everyday. Thank people. It’s a simple thing, really. Saying “thank you” can and will have enormous impact on the people around you. Oh. 
And, I apologize if the title of this article led you to thinking it would teach you how to be an amazing kisser. That’s a different article entirely for 24 ways to impress your friends! 2009 Jina Anne jina 2009-12-21T00:00:00+00:00 https://24ways.org/2009/make-out-like-a-bandit/ business
201 Lint the Web Forward With Sonarwhal Years ago, when I was a senior in college, most of my web development courses focused on two things: the basics like HTML and CSS (and boy, do I mean basic), and Adobe Flash. I spent many nights writing ActionScript 3.0 to build interactions for the websites that I would add to my portfolio. A few months after graduating, I built one website in Flash for a client, then never again. Flash was dying, and it became obsolete in my résumé and portfolio. That was my first lesson in the speed at which things change in technology, and what a daunting realization that was as a new graduate looking to enter the professional world. Now, seven years later, I work on the Microsoft Edge team where I help design and build a tool that would have lessened my early career anxieties: sonarwhal. Sonarwhal is a linting tool, built by and for the web community. The code is open source and lives under the JS Foundation. It helps web developers and designers like me keep up with the constant change in technology while simultaneously teaching how to code better websites. Introducing sonarwhal’s mascot Nellie Good web development is hard. It is more than HTML, CSS, and JavaScript: developers are expected to have a grasp of accessibility, performance, security, emerging standards, and more, all while refreshing this knowledge every few months as the web evolves. It’s a lot to keep track of. Web development is hard Staying up-to-date on all this knowledge is one of the driving forces for developing this scanning tool. Whether you are just starting out, are a student, or you have over a decade of experience, the sonarwhal team wants to help you build better websites for all browsers. Currently sonarwhal checks for best practices in five categories: Accessibility, Interoperability, Performance, PWAs, and Security. Each check is called a “rule”. You can configure them and even create your own rules if you need to follow some specific guidelines for your project (e.g. validate analytics attributes, title format of pages, etc.). You can use sonarwhal in two ways: An online version, which provides a quick and easy way to scan any public website. A command line tool, if you want more control over the configuration, or want to integrate it into your development flow. The Online Scanner The online version offers a streamlined way to scan a website; just enter a URL and you will get a web page of scan results with a permalink that you can share and revisit at any time. The online version of sonarwhal When my team works on a new rule, we spend the bulk of our time carefully researching each subject, finding sources, and documenting it rather than writing the rule’s code. Not only is it important that we get you the right results, but we also want you to understand why something is failing. Next to each failing rule you’ll find a link to its detailed documentation, explaining why you should care about it, what exactly we are testing, examples that pass and examples that don’t, and useful links to even more in-depth documentation if you are interested in the subject. We hope that between reading the documentation and continued use of sonarwhal, developers can stay on top of best practices. As devs continue to build sites and identify recurring issues that appear in their results, they will hopefully start to automatically include those missing elements or fix those pieces of code that are producing errors.
This also isn’t a one-way communication: the documentation is not only available on the sonarwhal site, but also on GitHub for editing so you can help us make it even better! A results report The current configuration for the online scanner is very strict, so it might hurt your feelings (it did when I first tested it on my personal website). But you can configure sonarwhal to any level of strictness as well as customize the command line tool to your needs! Sonarwhal’s CLI The CLI gives you full control of sonarwhal: what rules to use, tweaks to them, domains that are out of your control, and so on. You will need the latest Node LTS (v8) or Stable (v9) and your favorite package manager, such as npm: npm install -g sonarwhal You can now run sonarwhal from anywhere via: sonarwhal https://example.com Using the CLI The configuration is done via a .sonarwhalrc file. When analyzing a site, if no file is available, you will be prompted to answer a series of questions: What connector do you want to use? Connectors are what sonarwhal uses to access a website and gather all the information about the requests, resources, HTML, etc. Currently it supports jsdom, Microsoft Edge, and Google Chrome. What formatter? This is how you want to see the results: summary, stylish, etc. Make sure to look at the full list. Some are concise, perfect for a quick build assessment, while others are more verbose and informative. Do you want to use the recommended rules configuration? Rules are the things we are validating. Unless you’ve read the documentation and know what you are doing, first timers should probably use the recommended configuration. What browsers are you targeting? One of the best features of sonarwhal is that rules can adapt their feedback depending on your targeted browsers, suggesting to add or remove things. For example, the rule “Highest Document Mode” will tell you to add the “X-UA-Compatible” header if IE10 or lower is targeted, or to remove it if the opposite is true. sonarwhal configuration generator questions Once you answer all these questions the scan will start and you will have a .sonarwhalrc file similar to the following: { "connector": { "name": "jsdom", "options": { "waitFor": 1000 } }, "formatters": "stylish", "rulesTimeout": 120000, "rules": { "apple-touch-icons": "error", "axe": "error", "content-type": "error", "disown-opener": "error", "highest-available-document-mode": "error", "validate-set-cookie-header": "warning", // ... } } You should see the scan initiate in the command line and within a few seconds the results should start to appear. Remember, the scan results will look different depending on which formatter you selected so try each one out to see which one you like best. sonarwhal results on my website and hurting my feelings 💔 Now that you have a list of errors, you can get to work improving the site! Note, though, that when you scan your website, it scans all the resources on that page and if you’ve added something like analytics or fonts hosted elsewhere, you are unable to change those files. You can configure the CLI to ignore files from certain domains so that you are only getting results for files you are in control of. The documentation should give enough guidance on how to fix the errors, but if it’s insufficient, please help us and suggest edits or contribute back to it. This is a community effort and chances are someone else will have the same question as you. When I scanned both my websites, sonarwhal alerted me to not having an Apple Touch Icon.
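The eventual fix, for what it’s worth, is a single link element in the page’s head — a sketch with a hypothetical filename:

<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">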
If I search on the web as opposed to using the sonarwhal documentation, the top three results give me outdated information: I need to include many different icon sizes. I don’t need to include all the different size icons that target different devices. Declaring one icon sized 180px x 180px will provide a large enough icon for devices and it will scale down as appropriate for people on older devices. The information at the top of the search results isn’t always the correct answer to an issue and we don’t want you to have to search through outdated documentation. As sonarwhal’s capabilities expand, the goal is for it to be the one-stop shop for helping preflight your website. The journey up until now and looking forward On the Microsoft Edge team, we’re passionate about empowering developers to build great websites. Every day we see so many sites come through our issue tracker. (Thanks for filing those bugs; they help us make Microsoft Edge better and better!) Some issues we see over and over are honest mistakes or outdated ‘best practices’ that could be avoided, so we built this tool to help everyone make the web a better place. When we decided to create sonarwhal, we wanted to create a tool that would help developers write better and more up-to-date code for their websites. We want sonarwhal to be useful to anyone so, early on, we defined three guiding principles we’ve used along the way: Community Driven. We build for the community’s best interests. The web belongs to everyone and this project should too. Not only is it open source, we’ve also donated it to the JS Foundation and have an inclusive governance model that welcomes the collaboration of anyone, individual or company. User Centric. We want to put the user at the center, making sonarwhal configurable for your needs and easy to use no matter what your skill level is. Collaborative. We didn’t want to reinvent the wheel, so we collaborated with existing tools and services that help developers build for the web. Some examples are aXe, snyk.io, Cloudinary, etc. This is just the beginning and we still have lots to do. We’re hard at work on a backlog of exciting features for future releases, such as: New rules for a variety of areas like performance, accessibility, security, progressive web apps, and more. A plug-in for Visual Studio Code: we want sonarwhal to help you write better websites, and what better moment than when you are in your editor? Configuration options for the online service: as we fine-tune the infrastructure, the rule configuration for our scanner is locked, but we look forward to adding CLI customization options here in the near future. This is a tool for the web community by the web community so if you are excited about sonarwhal, making a better web, and want to contribute, we have a few issues where you might be able to help. Also, don’t forget to check the rest of the sonarwhal GitHub organization. PRs are always welcome and appreciated! Let us know what you think about the scanner at @NarwhalNellie on Twitter and we hope you’ll help us lint the web forward! 2017 Stephanie Drescher stephaniedrescher 2017-12-02T00:00:00+00:00 https://24ways.org/2017/lint-the-web-forward-with-sonarwhal/ code
195 Levelling Up for Junior Developers If you are a junior developer starting out in the web industry, things can often seem a little daunting. There are so many things to learn, and as soon as you’ve learnt one framework or tool, there seems to be something new out there. I am lucky enough to lead a team of developers building applications for the web. During a recent One to One meeting with one of our junior developers, he asked me about a learning path and the fundamentals that every developer should know. After a bit of digging around, I managed to come up with a (not so exhaustive) list of principles that was shared with him. In this article, I will share the list with you, and hopefully help you level up from junior developer and become a better developer all round. This list doesn’t focus on a particular programming language, but rather coding concepts as a whole. The idea behind this list is that whether you are a front-end developer, back-end developer, full stack developer or just a curious one, these principles apply to everyone that writes code. I have tried to be technology agnostic, so that you can use these tips to guide you, whatever your tech stack might be. Without any further ado and in no particular order, let’s get started. Refactoring code like a boss The Boy Scouts have a rule that goes “always leave the campground cleaner than you found it.” This rule can be applied to code too and ensures that you leave code cleaner than you found it. As a junior developer, it’s almost certain that you will either create or come across older code that could be improved. The resources below are a guide that will help point you in the right direction. My favourite book on this subject has to be Clean Code by Robert C. Martin. It’s a must-read for anyone writing code as it helps you identify bad code and shows you techniques that you can use to improve existing code. If you find that in your day-to-day work you deal with a lot of legacy code, Improving Existing Technology through Refactoring is another useful read. Design patterns are general, repeatable solutions to commonly occurring problems in software design. My friend and colleague Ranj Abass likes to refer to them as a “common language” that helps developers discuss the way that we write code as a pattern. My favourite book on this subject is Head First Design Patterns which goes right back to the basics. Another great read on this topic is Refactoring to Patterns. Working Effectively With Legacy Code is another one that I found really valuable. Improving your debugging skills A solid understanding of how to debug code is a must for any developer. Whether you write code for the web or purely back-end code, the ability to debug will save you time and help you really understand what is going on under the hood. If you write front-end code for the web, one of my favourite resources to help you understand how to debug code in Chrome can be found on the Chrome Dev Tools website. While some of the tips are specific to Chrome, these techniques apply to any modern browser of your choice. At Settled, we use Node.js for much of our server-side code. Without a doubt, our most trusted IDE has to be Visual Studio Code and the built-in debuggers are amazing. Regardless of whether you use Node.js or not, there are a number of plugins and debuggers that you can use in the IDE. I recommend reading the website of your favourite IDE for more information.
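As a concrete starting point, Node.js ships with a built-in inspector that these editor debuggers can attach to. A quick sketch (app.js is a hypothetical entry point):

// app.js — run with: node --inspect-brk app.js
// then attach the debugger from your IDE of choice
function add(a, b) {
  debugger; // execution pauses here while a debugger is attached
  return a + b;
}
console.log(add(1, 2));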
As a side note, it is worth mentioning that Chrome Developer Tools actually has functionality that allows you to debug Node.js code too. This makes it a seamless transition from front-end code to server-side code debugging. The Debugging Mindset is an informative online article by Devon H. O’Dell and discusses the psychology of learning strategies that lead to effective problem-solving skills. A good understanding of relational databases and NoSQL databases Almost all developers will need to persist data at some point in their career. Even if you don’t write SQL queries in your day-to-day job, a solid understanding of how they work will help you become a better developer. If you are a complete newbie when it comes to databases, I recommend checking out Code Academy. They offer a free online course that can help you get your head around how relational databases work. The course is quite basic, but is a useful hands-on approach to learning this topic. This article provides a great explainer for the difference between SQL and NoSQL databases, and this Stackoverflow answer goes a little deeper into the subject of the two database types. If you’d like to learn more about NoSQL queries, I would recommend starting with this article on MongoDB queries. Unfortunately, there isn’t one overall course as most NoSQL databases have their own syntax. You may also have noticed that I haven’t included other types of databases such as Graph or In-memory; it’s worth focussing on the basics before going any deeper. Performance on the web If you build for the web today, it is important to understand how the browser receives and renders the content that you send it. I am pretty passionate about Web Performance, and hope that everyone can learn how to make websites faster and more efficient. It can be fun at the same time! Steve Souders’ High Performance Websites is the godfather of web performance books. While it was created a few years ago and many of the techniques might have changed slightly, it is the original book on the subject and set up many of the ground rules that we know about web performance today. A free online resource on this topic is the Google Developers website. The site is an up-to-date guide on the best web performance techniques for your site. It is definitely worth a read. The network plays a key role in delivering data to your users, and it plays a big role in performance on the web. A fantastic book on this topic is Ilya Grigorik’s High Performance Browser Networking. It is also available to read online at hpbn.co. Understand the end-to-end architecture of your software project I find that one of the best ways to improve my knowledge is to learn about the architecture of the software at the company I work at. It gives you a good understanding as to why things are designed the way they are, why certain decisions were made, and gives you an understanding of how you might do things differently with hindsight. Try and find someone more senior, such as a Technical Lead or Software Architect, at your company and ask them to explain the overall architecture and draw a few high-level diagrams for you. Not to mention that they will be impressed with your willingness to learn. I recommend reading Clean Architecture: A Craftsman’s Guide to Software Structure and Design for more detail on this subject. Far too often, software projects can be over-engineered and over-architected, so it is worth reading Just Enough Software Architecture.
The book helps developers understand how the smallest of changes can affect the outcome of your software architecture. How are things deployed A big part of creating software is actually shipping it! How is the software at your company released into the wild? Does your company do Continuous Integration? Continuous Deployment? Even if you answered no to any of these questions, it is worth finding someone with the knowledge in your company to explain these things to you. If it is not already documented, perhaps you could start a wiki to document everything you’re learning about the system — this is a great way to level up and be appreciated and invaluable. A streamlined deployment process is a beautiful thing, and understanding how these processes work can help you grow your knowledge as a developer. Continuous Integration is a practical read on the ins and outs of implementing this deployment technique. Docker is another great tool to use when it comes to software deployment. It can be tricky at first to wrap your head around, but it is definitely worth learning about this great technology. The documentation on the website will teach and guide you on how to get started using Docker. Writing Tests Testing is an essential tool in the developer’s bag of skills. Tests help you to make big refactoring changes to your code, and feel a lot more confident knowing that your changes haven’t broken anything. There are so many benefits to testing, which make it so important for developers at every level to become acquainted with them. The book that started it all for me was Roy Osherove’s The Art of Unit Testing. The code in the book is written in C#, but the principles apply to every language. It’s a great, easy-to-understand read. Another great read is How Google Tests Software, which covers exactly what it says on the tin. It covers many different testing techniques such as exploratory, black box, white box, and acceptance testing, and really helps you understand how large organisations test their code. Soft skills Whilst reading through this article, you’ve probably noticed that a large chunk of it focusses on code and technical ability. Without a doubt, I’d say that it is even more important to be a good teammate. If you look up the definition of soft skills in the dictionary, it is defined as “personal attributes that enable someone to interact effectively and harmoniously with other people” and I think that it sums this up perfectly. Working on your “soft skills” is something that can truly help you level up in your career. You may be the world’s greatest coder, but if your colleagues can’t get along with you, your coding skills won’t matter! While you may not learn how to become the perfect co-worker overnight, I really try and live by the motto “don’t be an arsehole”. Think about how you like to be treated and then try and treat your co-workers with the same courtesy and respect. The next time you need to make a decision at work, ask yourself, “is this something an arsehole would do?” If you answered yes to that question, you probably shouldn’t do it! Summary Levelling up as a junior developer doesn’t have to be scary. Focus on the fundamentals and they should hold you in good stead, regardless of the new things that come along. Software engineering is built on these great principles that have stood the test of time. Whilst researching for this article, I came across a useful Github repo that is worth mentioning. Things Every Programmer Should Know is packed with useful information.
I have to admit, I didn’t know everything on there! I hope that you have found this list helpful. Some of the topics I have mentioned might not be relevant for you at this stage in your career, but should give a nudge in the right direction. After all, knowledge is power! If you are a junior developer reading this article, what would you add to it? 2017 Dean Hume deanhume 2017-12-05T00:00:00+00:00 https://24ways.org/2017/levelling-up-for-junior-developers/ code
2 Levelling Up Hello, 24 ways. I’m Ashley and I sell property insurance. I’m interrupting your Christmas countdown with an article about rental property software and a guy, Pete, who selflessly encouraged me to build my first web app. It doesn’t sound at all festive, or — considering I’ve used both “insurance” and “rental property” — interesting, but do stick with me. There’s eggnog at the end. I run a property insurance business, Brokers Direct. It’s a small operation, but well established. We’ve been selling landlord insurance on the web for over thirteen years, for twelve of which we have provided our clients with third-party software for managing their rental property portfolios. Free. Of. Charge. It sounds like a sweet deal for our customers, but it isn’t. At least, not any more. The third-party software is victim to years of neglect by its vendor. Its questionable interface, garish visuals and, ahem, clip art icons have suffered from a lack of updates. While it was never a contender for software of the year, I’ve steadily grown too embarrassed to associate my business with it. The third-party rental property software we distributed I wanted to offer my customers a simple, clean and lightweight alternative. In an industry that’s dominated by dated and bloated software, it seemed only logical that I should build my own rental property tool. The long learning-to-code slog Learning a programming language is daunting, the source of my frustration stemming from a non-programming background. Generally, tutorials assume a degree of familiarity with programming, whether it be tools, conventions or basic skills. I had none and, at the time, there was nothing on the web really geared towards a novice. I reached the point where I genuinely thought I was just not cut out for coding. Surrendering to my feelings of self-doubt and frustration, I sourced a local Rails developer, Pete, to build it for me. Pete brought a pack of index cards to our meeting. Index cards that would represent each feature the rental property software would launch with. “OK,” he began. “We’ll need a user model, tenant model, authentication, tenant and property relationships…” A dozen index cards with a dozen features lined the coffee table in a grid-like format. Logical, comprehensible, achievable. Seeing the app laid out in a digestible manner made it seem surmountable. Maybe I could do this. “I’ve been trying to learn Rails…”, I piped up. I don’t know why I said it. I was fully prepared to hire Pete to do the hard work for me. But Pete, unprompted, gathered the index cards and neatly stacked them together, coasting them across the table towards me. “You should build this”. Pete, a full-time freelance developer at the time, was turning down a paying job in favour of encouraging me to learn to code. Looking back, I didn’t realise how significant this moment was. That evening, I took Pete’s index cards home to make a start on my app, slowly evolving each of the cards into a working feature. Building the app solo, I turned to Stack Overflow to solve the inevitable coding hurdles I encountered, as well as calling on a supportive Rails community. Whether they provided direct solutions to my programming woes, or simply planted a seed on how to solve a problem, I kept coding. Many months later, and after several more doubtful moments, Lodger was born. Property overview of my app, Lodger. 
If I can do it, so can you I misspent a lot of time building Twitter and blogging applications (apparently, all Rails tutorials centre around Twitter and blogging). If I could rewind and impart some advice to myself, this is what I’d say. There’s no magic formula “I haven’t quite grasped Rails routing. I should tackle another tutorial.” Making excuses — or procrastinating — is something we are all guilty of. I was waiting for a programming book that would magically deposit a grasp of the entire Ruby syntax in my head. I kept buying books thinking each one would be the one where it all clicked. I now have a bookshelf full of Ruby material, all of which I’ve barely read, and none of which got me any closer to launching my web app. Put simply, there’s no magic formula. Break it down Whatever it is you want to build, break it down into digestible chunks. Taking Pete’s method as an example, having an index card represent an individual feature helped me tremendously. Tackle one at a time. Even if each feature takes you a month to build, and you have eight features to launch with, after eight months you’ll have your MVP. Remember, if you do nothing each day, it adds up to nothing. Have a tangible product to build I have a wonderful habit of writing down personal notes, usually to express my feelings at the time or to log an idea, only to uncover them months or years down the line, long after I forgot I had written them. I made a timely discovery while writing this article, discovering this gem while flicking through a battered Moleskine: “I don’t seem to be making good progress with learning Rails, but development still excites me. I should maybe stop doing tutorials and work towards building a specific app.” Having a real product to work on, like I did with Lodger, means you have something tangible to apply the techniques you are learning. I found this prevented me from flitting aimlessly between tutorials and books, which is an easy area to accidentally remain in. Team up If possible, team up with a designer and create something together. Designers are great at presenting features in a way you’d never have considered. You will learn a lot from making their designs come to life. Your homework for the holiday Despite having a web app under my belt, I am not a programmer. I tinker with code, piecing enough bits of it together to make something functional. And that’s OK! I’m not excusing sloppiness, but if we aimed for perfection every time, we’d never execute any of our ideas. As the holidays approach and you’ve exhausted yet another viewing of The Muppet Christmas Carol (or is that just my guilty pleasure at Christmas?), you may have time on your hands. Time to explore an idea you’ve been sitting on, but — plagued with procrastination and doubt — have yet to bring to life. This holiday, I am here to say to you what Pete said to me. You should build this. You don’t need to be the next Mark Zuckerberg or Larry Page. You just have to learn enough to get it done. PS: I lied about the eggnog, but try capturing somebody’s attention when you tell them you sell property insurance! 2013 Ashley Baxter ashleybaxter 2013-12-06T00:00:00+00:00 https://24ways.org/2013/levelling-up/ business
199 Knowing the Future - Tips for a Happy Launch Day You’ve chosen your frameworks and libraries. You’ve learned how to write code which satisfies the buzzword and performance gods. Now you need to serve it to a global audience, and make things easy to preview, to test, to sign-off, and to evolve. But infrastructure design is difficult and boring for most of us. We just want to get our work out into the wild. If only we had tools which would let us go, “Oh yeah! It all deploys perfectly every time” and shout, “You need another release? BAM! What’s next?” A truth that can be hard to admit is that very often, the production environment and its associated deployment processes are poorly defined until late into a project. This can be a problem. It makes my palms sweaty just thinking about it. If like me, you have spent time building things for clients, you’ll probably have found yourself working with a variety of technical partners and customers who bring different constraints and opportunities to your projects. Knowing and proving the environments and the deployment processes is often very difficult, but can be a factor which profoundly impacts our ability to deliver what we promised. To say nothing of our ability to sleep at night or leave our fingernails un-chewed. Let’s look at this a little, and see if we can’t set you up for a good night’s sleep, with dry palms and tidy fingernails. A familiar problem You’ve been here too, right? The project development was tough, but you’re pleased with what you are running in your local development environments. Now you need to get the client to see and approve your build, and hopefully indicate with a cheery thumbs up that it can “go live”. Chances are that we have a staging environment where the client can see the build. But be honest, is this exactly the same as the production environment? It should be, but often it’s not. Often the staging environment is nothing more than a visible server with none of the optimisations, security, load balancing, caching, and other vital bits of machinery that we’ll need (and need to test) in “prod”. Often the production environment is still being “set up” and you’ll have to wait and see. In development, “wait and see” is the enemy. Instead of waiting to see, we need to make the provisioning of, and deployment to our different environments one of the very first jobs of our project. I’ve often needed to be the unpopular voice in the room who makes a big fuss when this is delayed. I’ve described it as being a “critical blocker” during project meetings and suggested that everything should halt until it is fixed. It is that important. Clients don’t often like hearing a wary, disruptive voice saying “whoa there Nelly!”, because the development should be able to continue while the production environment gets sorted out, right? Sure. But if it is not seen as a blocker, it is seen as something that can just happen later. And if it happens later, all the ugly surprises and unknowns surface later too. And later is when we’ll need to be thinking about other things. Not the plumbing. Trust me, it pays to face up to the issue right away rather than press on optimistically. The client will thank you later. Attitudes and expectations We should, I think, exhibit these four attitudes towards production deployment: Make it scripted Make it automated Make it real Make it first Make it scripted Let’s face it, we are going to need to deploy more than once over the course of the project. 
We are not going to get things perfect on our first shot. Nor should we expect to. And if we are going to repeat something, we want to be able to do it identically and predictably every time without needing to rely on our memories. Developers are great at scripting things which they would otherwise need to repeat. It makes us faster and it also helps us keep track of the steps we need to take. I’m not crazy enough to try to suggest the best technology to script your builds or deployments (holy wars lie down that path). A lot will depend on your languages and your tastes. Some will like Fabric, others will prefer Gulp; you might prefer Make or NPM. It doesn’t really matter as long as you can script the process of building, packaging and deploying your project. Wait. Won’t we need to know everything about the build from the start in order to do this? Aren’t our dependencies likely to change over time? Yes. That would be ideal. But it’s ok. Like our code, our deployment script will evolve over the life of a project. So evolve it. Start by scripting what is needed to support the first iteration of the project, and then maintain that script. It will become a valuable “source of truth”, providing a form of documentation of what your project needs for a successful deployment. Another bonus. Make it automated If we have a scripted deployment which we can run by executing a single command, then we are in great shape to automate that process by triggering the build and deployment via suitable events. Again, I prefer not to offer one single suggestion of when this should occur. That will depend on your approach to the project, how your development team is organised, and how your QA team operate. You can tune this to suit. For one project I worked on, we chose to trigger the build and deployment to our production environment every time we used Git to tag the master branch of our version control repository. There were a few moving parts, and we needed to do some upfront work to get everything working, but that upfront effort was repaid many times over as we deployed time and time again, and exposed some issues with our environment long before we got to “launch day”. With a scripted and automated process, we can make deployments “cheap”. This is our goal. When there are minimal cognitive or time overheads associated with deploying, we’re likely to do it all the more often and become more confident that it will behave as expected. Make it real Alright, we have written scripts to build and deploy our projects. Anyone tagging our repo will trigger things to happen as if by magic, but where are we pushing things to? We need to target a real environment if this is to have any value. A useful pattern is to have all activity on our develop branch trigger deployments to our staging server. Meanwhile, tagging master will deploy a version to the production environment. How we organise this will depend on our git branching approach. (I’ve seen as many ways of approaching Git Flow as I have seen ways of approaching “Agile”). It’s vital, though, that we ensure that we are deploying to, and testing against, our real infrastructure. We want to see real results. That’s the best way to learn real lessons. Make it first Building our site to run in an environment not yet fully defined or available to test is like climbing without ropes – it’s possible, but we put ourselves at risk. And the higher we climb the greater the risk. So it is important to do this as early as we possibly can. Don’t have a certificate for HTTPS yet?
Fine, but let’s still deploy to this evolving production environment and introduce HTTPS as soon as we can. Before we know it we’ll be proving that this is set up correctly and we’ll not be surprised by mixed security alerts or other nasties further down the line. Mailchimp perfectly capture the anxiety of sending emails to gazillions of people for a campaign. But we’re lucky. Launching a site doesn’t need to be like performing a mailshot. We can do things to banish those sweaty palms. Doing preparation work upfront means that by the time we need to launch the site into the wild, we have exercised the deployment mechanics, and tested the production environment so rigorously that this task will be boring. (It won’t be boring. Launching should always be exciting because the world will finally get to see our beautiful, painstaking work. But nor should it be terrifying. Especially as a result of not knowing for certain if our processes and environments are going to work or burst into flames on the big day.) What tools exist? Well this all sounds lovely. But how should we tackle this? Where are the tools for us to use? As it happens, there are many services and tools that we can use to work this way. Hosting All of the big players like Amazon, Azure and Google offer tools which can help us here. Google, for example, can host multiple deployed versions of your project in parallel and you can manage them via their App Engine console. Each build receives its own URL which you can use to access any deployed version of your site. Having immutable deployments which stick around in perpetuity (or until you bin them) is a key feature which unlocks the ability to confidently direct your traffic to any version of your site. With that comes the capacity to test any version or feature in its real environment, and then promote a version, or roll back to a previous version whenever you want. A liberating power to have. Continuous integration In order to create all of those different versions, we’ll need somewhere to run our build and deployment scripts. Jenkins has been a popular Continuous Integration (CI) option for some time, and can be configured to perform all sorts of tasks, giving you extensive control over your deployment pipeline. You need to host Jenkins yourself, but it provides some simple ways to do that. The landscape for CI is getting richer and richer, with many hosted services like Circle CI providing this kind of automation up in the cloud. One stop shop Netlify combines both hosting and continuous integration services. It monitors your git repositories and automatically runs your build in a container on its servers when it finds changes. Each branch and pull request in your git repository will result in an immutable version of your site with its own URL. Netlify is unlike Google Cloud, AWS or Azure in that it cannot host a dynamic server-side application for you. Instead it specialises in hosting static, or so-called JAMstack sites. Personally, I find that its simplicity makes it an approachable option, and a good place to learn and adopt some of these valuable habits. Full disclosure: I’m a Netlify employee. But before I was, I was an avid customer, and it was through using Netlify that I first encountered some of these principles in practice. Conclusion. It’s all about the approach No matter what tools or services you use (and there are many which can support these practices), the most important thing is to adopt an approach which lets you prove your environments as quickly as possible.
Front-loading this effort will cast light onto the issues that you’ll need to address early and often, leaving no infrastructure surprises to spoil things for you on launch day. Automating the process will mean that when you do find things that you need to fix or to improve later (and you will), issuing another release will be trivial. It is a lovely feeling when you have confidence that releasing v1.0.0 will be no more stressful than releasing v0.0.1. In fact it should actually be less stressful, as you’ll have been down this road many times by then, fixing the potholes and smoothing the way as you went. From here, it should be a smooth ride. 2017 Phil Hawksworth philhawksworth 2017-12-21T00:00:00+00:00 https://24ways.org/2017/knowing-the-future/ process
129 Knockout Type - Thin Is Always In OS X has gorgeous native anti-aliasing (although I will admit to missing 10px aliased Geneva — *sigh*). This is especially true for dark text on a light background. However, things can go awry when you start using light text on a dark background. Strokes thicken. Counters constrict. Letterforms fill out like seasonal snackers. So how do we combat the fat? In Safari and other Webkit-based browsers we can use the CSS ‘text-shadow’ property. While trying to add a touch more contrast to the navigation on haveamint.com I noticed an interesting side-effect on the weight of the type. The second line in the example image above has the following style applied to it: text-shadow: 0 0 0 #000; This creates an invisible drop-shadow. (Why is it invisible? The shadow is positioned directly behind the type (the first two zeros) and has no spread (the third zero). So the color, black, is completely eclipsed by the type it is supposed to be shadowing.) Why applying an invisible drop-shadow effectively lightens the weight of the type is unclear. What is clear is that our light-on-dark text is now of a comparable weight to its dark-on-light counterpart. You can see this trick in effect all over ShaunInman.com and in the navigation on haveamint.com and Subtraction.com. The HTML and CSS source code used to create the example images in this article can be found here. 2006 Shaun Inman shauninman 2006-12-17T00:00:00+00:00 https://24ways.org/2006/knockout-type/ code
24 Kill It With Fire! What To Do With Those Dreaded FAQs In the mid-1640s, a man named Matthew Hopkins attempted to rid England of the devil’s influence, primarily by demanding payment for the service of tying women to chairs and tossing them into lakes. Unsurprisingly, his methods garnered criticism. Hopkins defended himself in The Discovery of Witches in 1647, subtitled “Certaine Queries answered, which have been and are likely to be objected against MATTHEW HOPKINS, in his way of finding out Witches.” Each “querie” was written in the voice of an imagined detractor, and answered in the voice of an imagined defender (always referring to himself as “the discoverer,” or “him”): Quer. 14. All that the witch-finder doth is to fleece the country of their money, and therefore rides and goes to townes to have imployment, and promiseth them faire promises, and it may be doth nothing for it, and possesseth many men that they have so many wizzards and so many witches in their towne, and so hartens them on to entertaine him. Ans. You doe him a great deale of wrong in every of these particulars. Hopkins’ self-defense was an early modern English FAQ. Digital beginnings Question and answer formatting certainly isn’t new, and stretches back much further than witch-hunt days. But its most modern, most notorious, most reviled incarnation is the internet’s frequently asked questions page. FAQs began showing up on pre-internet mailing lists as a way for list members to answer and pre-empt newcomers’ repetitive questions: The presumption was that new users would download archived past messages through ftp. In practice, this rarely happened and the users tended to post questions to the mailing list instead of searching its archives. Repeating the “right” answers becomes tedious… When all the users of a system can hear all the other users, FAQs make a lot of sense: the conversation needs to be managed and manageable. FAQs were a stopgap for the technological limitations of the time. But the internet moved past mailing lists. Online information can be stored, searched, filtered, and muted; we choose and control our conversations. New users no longer rely on the established community to answer their questions for them. And yet, FAQs are still around. They’re a content anti-pattern, replicated from site to site to solve a problem we no longer have. What we hate when we hate FAQs As someone who creates and structures online content – always with the goal of making that content as useful as possible to people – FAQs drive me absolutely batty. Almost universally, FAQs represent the opposite of useful. A brief list of their sins: Double trouble Duplicated content is practically a given with FAQs. They’re written as though they’ll be accessed in a vacuum – but search results, navigation patterns, and curiosity ensure that users will seek answers throughout the site. Is our goal to split their focus? To make them uncertain of where to look? To divert them to an isolated microcosm of the website? Duplicated content means user confusion (to say nothing of the duplicated workload for maintaining content). Leaving the job unfinished Many FAQs fail before they’re even out of the gate, presenting a list of questions that’s incomplete (too short and careless to be helpful) or irrelevant (avoiding users’ real concerns in favor of soundbites). Alternately, if the right questions are there, the answers may be convoluted, jargon-heavy, or otherwise difficult to understand. 
Long lists of not-my-question Getting a single answer often means sifting through a haystack of questions. For each potential question, the user must read, comprehend, assess, move on, rinse, repeat. That’s a lot of legwork for little reward – and a lot of opportunity for mistakes. Users may miss their question, or they may fail to recognize a differently worded version of their question, or they may not notice when their sought-after answer appears somewhere they didn’t expect. The ventriloquist act FAQs shift the point of view. While websites speak on behalf of the organization (“our products,” “our services,” “you can call us for assistance,” etc.), FAQs speak as the user – “I can’t find my password” or “How do I sign up?” Both voices are written from the first-person perspective, but speak for different entities, which is disorienting: it breaks the tone and messaging across the website. It’s also presumptuous: why do you get to speak for the user? These all underscore FAQs’ fatal flaw: they are content without context, delivered without regard for the larger experience of the website. You can hear the absurdity in the name itself: if users are asking the same questions so frequently, then there is an obvious gulf between their needs and the site content. (And if not, then we have a labeling problem.) Instead of sending users to a jumble of maybe-it’s-here-maybe-it’s-not questions, the answers to FAQs should be found naturally throughout a website. They are not separated, not isolated, not other. They are the content. To present it otherwise is to create a runaround, and users know it. Jay Martel’s parody, “F.A.Q.s about F.A.Q.s” captures the silliness and frustration of such a system: Q: Why are you so rude? A: For that answer, you would have to consult an F.A.Q.s about F.A.Q.s about F.A.Q.s. But your time might be better served by simply abandoning your search for a magic answer and taking responsibility for your own profound ignorance. FAQs aren’t magic answers. They don’t resolve a content dilemma or even help users. Yet they keep cropping up, defiant, weedy, impossible to eradicate. Where are they all coming from? Blame it on this: writing is hard. When generating content, most of us do whatever it takes to get some words on the screen. And the format of question and answer makes it easy: a reactionary first stab at content development. After all, the point of website content is to answer users’ questions. So this – to give everyone credit – is a really good move. Content creators who think in terms of questions and answers are actually thinking of their users, particularly first-time users, trying to anticipate their needs and write towards them. It’s a good start. But it’s scaffolding: writing that helps you get to the writing you’re supposed to be doing. It supports you while you write your way to the heart of your content. And once you get there, you have to look back and take the scaffolding down. Leaving content in the Q&A format that helped you develop it is missing the point. You’re not there to build scaffolding. You have to see your content in its naked purpose and determine the best method for communicating that purpose – and it usually won’t be what got you there. The goal (to borrow a lesson from content management systems) is to separate the content from its presentation, to let the meaning of the content inform its display. This is, of course, a nice theory. An occasionally necessary evil I have a lot of clients who adore FAQs. 
They’ve developed their content over a long period of time. They’ve listened to the questions their users are asking. And they’ve answered them all on a page that I simply cannot get them to part with. Which means I’ve had to consider that there may be occasions where an FAQ page is appropriate. As an example: one of my clients is a financial office in a large institution. Because this office manages several third-party systems that serve a range of niche audiences, they had developed FAQs that addressed hyper-specific instances of dysfunction within systems for different users – à la “I’m a financial director and my employee submitted an expense report in such-and-such system and it returned such-and-such error. What do I do?” Yes, this content could be removed from the question format and rewritten. But I’m not sure it would be an improvement. It won’t necessarily resolve concerns about length and searchability, and the different audiences may complicate the delivery. And since the work of rewriting it didn’t fit into the client workflow (small team, no writers, pressed for time), I didn’t recommend the change. I’ve had to make peace with not being able to torch all the FAQs on the internet. Some content, like troubleshooting information or complex procedures, may be better in that format. It may be the smartest way for a particular client to handle that particular information. Of course, this has to be determined on a case-by-case basis, taking into account the amount of content, the subject matter, the skill levels of the content creators, the publishing workflow, and the search habits of the users. If you determine that an FAQ page is the only way to go, ask yourself: Is there a better label or more specific term for the page (support, troubleshooting, product concerns, etc.)? Is there a way to structure the page, categorize the questions, or otherwise make it easier for users to navigate quickly to the answer they need? Is a question and answer format absolutely the best way to communicate this information? Form follows function Just as a question and answer format isn’t necessarily required to deliver the content, neither is it an inappropriate method in and of itself. Content professionals have developed a knee-jerk reaction: It’s an FAQ page! Quick, burn it! Buuuuurn it! But there’s no inherent evil in questions and answers. Framing content in an interrogatory construct is no more a deal with the devil than subheads and paragraphs, or narrative arcs, or bullet points. Yes, FAQs are riddled with communication snafus. They deserve, more often than not, to be tied to a chair and thrown into a lake. But that wouldn’t fix our content problems. FAQs are a shiny and obvious target for our frustration, but they’re not unique in their flaws. In any format, in any display, in any kind of page, weak content can rear its ugly, poorly written head. It’s not the Q&A that’s to blame, it’s bad content. Content without context will always fail users. That’s the real witch in our midst. 2013 Lisa Maria Martin lisamariamartin 2013-12-08T00:00:00+00:00 https://24ways.org/2013/what-to-do-with-faqs/ content
21 Keeping Parts of Your Codebase Private on GitHub Open source is brilliant, there’s no denying that, and GitHub has been instrumental in open source’s recent success. I’m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. To this end, GitHub created private repositories which act like any other Git repository, only, well, private! A slightly less common issue, and one I’ve come up against myself, is the desire to only keep certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes. Before we begin, it’s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and expose parts of it as either an open source project, or a built GitHub Pages site. N.B. This post requires some basic Git knowledge. Adding your public remote Let’s assume you’re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the “Adding your private remote” section.) So, we have a clean slate: nothing has been set up yet, we’re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com. Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account). On your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff — whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this: $ git remote add origin git@github.com:[user]/site.com.git Here we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you’ve explicitly changed it) to that remote: $ git push -u origin master Here we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like. This is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. Now to set up our private remote. Adding your private remote We’ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. (Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. 
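(A quick sanity check before we carry on: at this point, running git remote -v should list just the one remote, with an entry each for fetch and push, something like origin git@github.com:[user]/site.com.git (fetch) and origin git@github.com:[user]/site.com.git (push).)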
What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. Two GitHub repos, and one local one. In your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don’t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository. $ git checkout -b dev We have now made a new branch called dev off the branch we were on last (master, unless you renamed it). Now we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote: $ git remote add private git@github.com:[user]/private.site.com.git Like before, we are just telling Git to add a new remote to this repo, only this time we’ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it. Now we need to tell our dev branch to push to our private remote: $ git push -u private dev Here, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we’ve set up upstream tracking. This means that, by default, the dev branch will only push and pull to and from the private remote (unless you ever explicitly state otherwise). Now you have two branches (master and dev respectively) that push to two remotes (origin and private respectively) which are public and private respectively. Any work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote. Adding more branches So far we’ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you’d like. Of course, you’d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let’s imagine we want to create a branch to try something out real quickly: $ git checkout -b test Now, when we come to push this branch, we can choose which remote we send it to: $ git push -u private test This pushes the new test branch to our private remote (again, setting the persistent tracking with -u). You can have as many or as few remotes or branches as you like. Combining the two Let’s say you’ve been working on a new feature in private for a few days, and you’ve kept that on the private remote. You’ve now finalised the addition and want to move it into your public repo. This is just a simple merge. Check out your master branch: $ git checkout master Then merge in the branch that contained the feature: $ git merge dev Now master contains the commits that were made on dev and, once you’ve pushed master to its remote, those commits will be viewable publicly on GitHub: $ git push Note that we can just run $ git push on the master branch as we’d previously set up our upstream tracking (-u). Multiple machines So far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let’s say you’ve got yourself a new Mac (yay!) 
and you want to clone an existing project: $ git clone git@github.com:[user]/site.com.git This will not clone any information about the remotes you had set up on the previous machine. Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above. Done! If you’d like to see me blitz through all that in one go, check the showterm recording. The beauty of this is that we can still share our code, but we don’t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it’s ready for merge. Have a blog post in a Jekyll site that you’re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it’s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain. All this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. What you do with them is entirely up to you! 2013 Harry Roberts harryroberts 2013-12-09T00:00:00+00:00 https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/ code
161 Keeping JavaScript Dependencies At Bay As we are writing more and more complex JavaScript applications we run into problems that have hitherto (god I love that word) not been an issue. The first decision we have to make is what to do when planning our app: one big massive JS file or a lot of smaller, specialised files separated by task. Personally, I tend to favour the latter, mainly because it allows you to work on components in parallel with other developers without lots of clashes in your version control. It also means that your application will be more lightweight as you only include components on demand. Starting with a global object This is why it is a good plan to start your app with one single object that also becomes the namespace for the whole application, say for example myAwesomeApp: var myAwesomeApp = {}; You can nest any necessary components into this one and also make sure that you check for dependencies like DOM support right up front. Adding the components The other thing to add to this main object is a components object, which defines all the components that are there and their file names. var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } } }; Technically you can also omit the loaded properties, but it is cleaner this way. The next thing to add is an addComponent function that can load your components on demand by adding new SCRIPT elements to the head of the document when they are needed. var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } } }; This allows you to add new components on the fly when they have not yet been loaded: if(!myAwesomeApp.components.gallery.loaded){ myAwesomeApp.addComponent('gallery'); }; Verifying that components have been loaded However, this is not safe as the file might not be available. To make the dynamic adding of components safer, each of the components should have a callback at the end of them that notifies the main object that they indeed have been loaded: var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } }, componentAvailable:function(component){ this.components[component].loaded = true; } } For example the gallery.js file should call this notification as a last line: myAwesomeApp.gallery = function(){ // [... other code ...]
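// the closing pair of lines below runs this function immediately,
// then calls componentAvailable so the main object (and any
// registered listener) knows the gallery component has loaded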
}(); myAwesomeApp.componentAvailable('gallery'); Telling the implementers when components are available The last thing to add (actually as a courtesy measure for debugging and implementers) is to offer a listener function that gets notified when the component has been loaded: var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } }, componentAvailable:function(component){ this.components[component].loaded = true; if(this.listener){ this.listener(component); }; } }; This allows you to write a main listener function that acts when certain components have been loaded, for example: myAwesomeApp.listener = function(component){ if(component === 'gallery'){ showGallery(); } }; Extending with other components As the main object is public, other developers can extend the components object with their own components and use the listener function to load dependent components. Say you have a bespoke component with data and labels in extra files: myAwesomeApp.listener = function(component){ if(component === 'bespokecomponent'){ myAwesomeApp.addComponent('bespokelabels'); }; if(component === 'bespokelabels'){ myAwesomeApp.addComponent('bespokedata'); }; if(component === 'bespokedata'){ myAwesomeApp.bespokecomponent.init(); }; }; myAwesomeApp.components.bespokecomponent = { url:'bespoke.js', loaded:false }; myAwesomeApp.components.bespokelabels = { url:'bespokelabels.js', loaded:false }; myAwesomeApp.components.bespokedata = { url:'bespokedata.js', loaded:false }; myAwesomeApp.addComponent('bespokecomponent'); Following this practice you can write pretty complex apps and still have full control over what is available when. You can also extend this to allow for CSS files to be added on demand. Influences If you like this idea and wondered if someone already uses it, take a look at the Yahoo! User Interface library, and especially at the YAHOO_config option of the global YAHOO.js object. 2007 Christian Heilmann chrisheilmann 2007-12-18T00:00:00+00:00 https://24ways.org/2007/keeping-javascript-dependencies-at-bay/ code
203 Jobs-to-Be-Done in Your UX Toolbox Part 1: What is JTBD? The concept of a “job” in “Jobs-To-Be-Done” is neatly encapsulated by an oft-quoted line from Theodore Levitt: “People want a quarter-inch hole, not a quarter-inch drill”. Even so, Don Norman pointed out that perhaps Levitt “stopped too soon” at what the real customer goal might be. In “The Design of Everyday Things”, he wrote: “Levitt’s example of the drill implying that the goal is really a hole is only partially correct, however. When people go to a store to buy a drill, that is not their real goal. But why would anyone want a quarter-inch hole? Clearly that is an intermediate goal. Perhaps they wanted to hang shelves on the wall. Levitt stopped too soon. Once you realize that they don’t really want the drill, you realize that perhaps they don’t really want the hole, either: they want to install their bookshelves. Why not develop methods that don’t require holes? Or perhaps books that don’t require bookshelves.” In other words, a “job” in JTBD lingo is a way to express a user need or provide a customer-centric problem frame that’s independent of a solution. As Tony Ulwick says: “A job is stable, it doesn’t change over time.” An example of a job is “tiding you over from breakfast to lunch.” You could hire a donut, a flapjack or a banana for that mid-morning snack—whatever does the job. If you can arrive at a clearly identified primary job (and likely some secondary ones too), you can be more creative in how you come up with an effective solution while keeping the customer problem in focus. The team at Intercom wrote a book on their application of JTBD. In it, Des Traynor cleverly characterised how JTBD provides a different way to think about solutions that compete for the same job: “Economy travel and business travel are both capable candidates applying for [the job: Get me face-to-face with my colleague in San Francisco], though they’re looking for significantly different salaries. Video conferencing isn’t as capable, but is willing to work for a far smaller salary. I’ve a hiring choice to make.” So far so good: it’s relatively simple to understand what a job is, once you understand how it’s different from a “task”. Business consultant and Harvard professor Clay Christensen talks about the concept of “hiring” a product to do a “job”, and firing it when something better comes along. If you’re a company that focuses solutions on the customer job, you’re more likely to succeed. You’ll find these concepts often referred to as “Jobs-to-be-Done theory”. But the application of Jobs-to-Be-Done theory is a little more complicated; it comprises several related approaches. I particularly like Jim Kalbach’s description of how JTBD is a “lens through which to understand value creation”. But it is also more. In my view, it’s a family of frameworks and methods—and perhaps even a philosophy. Different facets in a family of frameworks JTBD has its roots in market research and business strategy, and so it comes to the research table from a slightly different place compared to traditional UX or design research—we have our roots in human-computer interaction and ergonomics. I’ve found it helpful to keep in mind that the application of JTBD theory is an evolving beast, so it’s common to find contradictions across different resources. My own use of it has varied from project to project. In speaking to others who have adopted it in different measures, it seems that we have all applied it in somewhat multifarious ways.
As we often like to say in interviews: there are no wrong answers. Outcome Driven Innovation Tony Ulwick’s version of the JTBD history began with Outcome Driven Innovation (ODI), and this approach is best outlined in his seminal article published in the Harvard Business Review in 2002. To understand his more current JTBD approach in his new book “Jobs to Be Done: Theory to Practice”, I actually found it beneficial to read his approach in the original 2002 article for a clearer reference point. In the earlier article, Ulwick presented a rigorous approach that combines interviews, surveys and an “opportunity” algorithm—a sequence of steps to determine the business opportunity. ODI centres around working with “desired outcome statements” that you unearth through interviews, followed by a means to quantify the gap between importance and satisfaction in a survey to different types of customers. Since 2008, Ulwick has written about using job maps to make sense of what the customer may be trying to achieve. In a recent article, he describes the aim of the activity as being “to discover what the customer is trying to get done at different points in executing a job and what must happen at each juncture in order for the job to be carried out successfully.” A job map is not strictly a journey map, however tempting it is to see it that way. From a UX perspective, it is one of many models we can use—and as our research team at Clearleft have found, how we use the model can depend on the nature of the jobs we’ve uncovered in interviews and the characteristics of the problem we’re attempting to solve. Figure 1. Universal job map Ulwick’s current methodology is outlined in his new book, where he describes a complete end-to-end process: from customer and competitor research to framing market and product strategy. The Jobs-To-Be-Done Interview Back in 2013, I attended a workshop by Chris Spiek and Bob Moesta from the ReWired Group on JTBD at the behest of a then-MailChimp colleague, and I came away excited about their approach to product research. It felt different from anything I’d done before and for the first time in years, I felt that I was genuinely adding something new to my research toolbox. A key idea is that if you focus on the stories of those who switched to you, and those who switch away from you, you can uncover the core jobs through looking at these opposite ends of engagement. This framework centres around the JTBD interview method, which harnesses the power of a narrative framework to elicit the real reasons why someone “hired” something to do a job—be it something physical like a new coffee maker, or a digital service, such as a to-do list app. As you interview, you are trying to unearth the context around the key moments on the JTBD timeline (Figure 2). A common approach is to begin from the point the customer might have purchased something, back to the point where the thought of buying this thing first occurred to them. Figure 2. JTBD Timeline Figure 3. The Four Forces The Forces Diagram (Figure 3) is a post-interview analysis tool where you can map out what causes customers to switch to something new and what holds them back. The JTBD interview is effective at identifying core and secondary jobs, as well as some context around the user need. Because this method is designed to extract the story from the interviewee, it’s a powerful way to facilitate recall.
Having done many such interviews, I’ve noticed one interesting side effect: participants often remember more details later on after the conversation has formally ended. It is worth scheduling a follow-up phone call or keeping the channels open. Strengths aside, it’s good to keep in mind that the JTBD interview is still primarily an interview technique, so you are relying on the context from the interviewee’s self-reported perspective. For example, a stronger research methodology combines JTBD interviews with contextual research and quantitative methods. Job Stories Alan Klement is credited with coming up with the term “job story” to describe the framing of jobs for product design by the team at Intercom: “When … I want to … so I can ….” A hypothetical example: “When I’ve just landed in a new city, I want to get online quickly, so I can let my family know I’ve arrived safely.” Figure 4. Anatomy of a Job Story Unlike a user story that traditionally frames a requirement around personas, job stories frame the user need based on the situation and context. Paul Adams, the VP of Product at Intercom, wrote: “We frame every design problem in a Job, focusing on the triggering event or situation, the motivation and goal, and the intended outcome. […] We can map this Job to the mission and prioritise it appropriately. It ensures that we are constantly thinking about all four layers of design. We can see what components in our system are part of this Job and the necessary relationships and interactions required to facilitate it. We can design from the top down, moving through outcome, system, interactions, before getting to visual design.” Systems of Progress Apart from advocating using job stories, Klement believes that a core tenet of applying JTBD revolves around our desire for “self-betterment”—and that focusing on everyone’s desire for self-betterment is core to a successful strategy. In his book, Klement takes JTBD further to being a tool for change through applying systems thinking. There, he introduces the systems of progress and how they can help focus a product strategy to be more innovative. Coincidentally, I applied similar thinking on mapping systemic change when we were looking to improve users’ trust in a local government forum earlier this year. It’s not just about capturing and satisfying the immediate job-to-be-done, it’s about framing the job so that you have a clear vision of how you can help your users improve their lives in the ways they want to. This is really the point where JTBD becomes a philosophy of practice. Part 2: Mixing It Up There has been some misunderstanding about how adopting JTBD means ditching personas or some of our existing design tools or research techniques. This couldn’t have been more wrong. Figure 5: Jim Kalbach’s JTBD model Jim Kalbach has used Outcome-Driven Innovation for around 10 years. In a 2016 article, he presents a synthesised model of how to think about JTBD that has key elements from ODI, Christensen’s theories and the structure of the job story. More interestingly, Kalbach has also combined the use of mental models with JTBD. Claire Menke of Udemy has written a comprehensive article about using personas, JTBD and customer journey maps together in order to communicate a more complete story from the users’ perspective.
Claire highlights an especially interesting point in her article as she described her challenges: “After much trial and error, I arrived at a foundational research framework to suit every team’s needs — allowing everyone to share the same holistic understanding, but extract the type of information most applicable to their work.” In other words, the organisational context you are in can likely dictate what works best—after all, the goal is to arrive at the best user experience for your audiences. Intercom can afford to go all in on applying JTBD theory as a dominant approach because they are a start-up, but a large company or organisation with multiple business units may require a mix of tools, outputs and outcomes. JTBD is an immensely powerful approach on many fronts—you’ll find many different references that list the ways you can apply JTBD. However, in the context of this discussion, it might also be useful to examine where it lies in our models of how we think about our UX and product processes. JTBD in the UX ecosystem Figure 6. The Elements of User Experience (source) There are many ways we have tried to explain the UX discipline but I think Jesse James Garrett’s Elements of User Experience is a good place to begin. I sometimes also use a little diagram to help me describe the different levels you might work at when you work through the complexity of designing and developing a product. A holistic UX strategy needs to address all the different levels for a comprehensive experience: your individual product UI, product features, product propositions and brand need to have a cohesive definition. Figure 7. Which level of product focus? We could, of course, also think about where it fits best within the double diamond. Again, bearing in mind that JTBD has its roots in business strategy and market research, it is excellent at clarifying user needs, defining high-level specifications and content requirements. It is excellent for validating brand perception and value proposition — all the way down to your feature set. In other words, it can be extremely powerful all the way through to halfway of the second diamond. You could quite readily combine the different JTBD approaches because they have differences as much as overlaps. However, JTBD generally starts getting a little difficult to apply once we get to the details of UI design. The clue lies in JTBD’s raison d’être: a job statement is solution independent. Hence, once we get to designing solutions, we potentially fall into an existential black hole. That said, Jim Kalbach has a quick case study on applying JTBD to content design tucked inside the main article on a synthesised JTBD model. Alan Klement has a great example of how you could use UI to resolve job stories. You’ll notice that the available language of “jobs” drops off at around that point. Job statements and outcome statements provide excellent “mini north-stars” as customer-oriented focal points, but purely satisfying these statements would not necessarily guarantee that you have created a seamless and painless user experience. Playing well with others You will find that JTBD plays well with Lean, and other strategy tools like the Value Proposition Canvas. With every new project, there is potential to harness the power of JTBD alongside our established toolbox. When we need to understand complex contexts where cultural or socioeconomic considerations have to be taken into account, we are better placed combining JTBD with more anthropological approaches.
And while we might be able to evaluate if our product, website or app satisfies the customer jobs through interviews or surveys, without good old-fashioned usability testing we are unlikely to be able to truly validate why the job isn’t being represented as it should be. In this case, individual jobs solved on the UI can be set up as hypotheses to be proven right or wrong. The application of Jobs-to-be-Done is still evolving. I’ve found it to be very powerful and I struggle to remember what my UX professional life was like before I encountered it—it has completely changed my approach to research and design. The fact JTBD is still evolving as a practice means we need to be watchful of dogma—there’s no right way to get a UX job done after all, it nearly always depends. At the end of the day, isn’t it about having the right tool for the right job? 2017 Steph Troeth stephtroeth 2017-12-04T00:00:00+00:00 https://24ways.org/2017/jobs-to-be-done-in-your-ux-toolbox/ ux
11 JavaScript: Taking Off the Training Wheels JavaScript is the third pillar of front-end web development. Of those pillars, it is both the most powerful and the most complex, so it’s understandable that when 24 ways asked, “What one thing do you wish you had more time to learn about?”, a number of you answered “JavaScript!” This article aims to help you feel happy writing JavaScript, and maybe even without libraries like jQuery. I can’t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources. Why learn JavaScript? So what’s in it for you? Why take the next step and learn the fundamentals? Confidence with jQuery If nothing else, learning JavaScript will improve your jQuery code; you’ll be comfortable writing jQuery from scratch and feel happy bending others’ code to your own purposes. Writing efficient, fast and bug-free jQuery is also made much easier when you have a good appreciation of JavaScript, because you can look at what jQuery is really doing. Understanding how JavaScript works lets you write better jQuery because you know what it’s doing behind the scenes. When you need to leave the beaten track, you can do so with confidence. In fact, you could say that jQuery’s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That’s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn’t needed. An example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here’s a comparison of some jQuery code and the equivalent without. $('.counter').each(function (index) { $(this).text(index + 1); }); var counters = document.querySelectorAll('.counter'); [].slice.call(counters).forEach(function (elem, index) { elem.textContent = index + 1; }); Solving problems no one else has! When you have to go to the internet to solve a problem, you’re forever stuck reusing code other people wrote to solve a slightly different problem to your own. Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has. Node.js Node.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don’t worry: the Node community is thriving, very friendly and willing to help. I think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting! Here’s an example web server written with Node. http is a module that allows you to create servers and, like jQuery’s $.ajax, make requests. It’s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it’s certainly not out of your reach. var http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World'); }).listen(1337); console.log('Server running at http://localhost:1337/'); Grunt and other website tools Node has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. 
Both of these rely heavily on Node, and I’ll talk a little bit about Grunt here. Grunt is a task runner, and many people use it for compiling Sass or compressing their site’s JavaScript and images. It’s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer. Ways to improve your skills So you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas. Rebuild a jQuery app Converting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you’re feeling adventurous. There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way’s jQuery to JavaScript article. Find a mentor If you think you’d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. The JavaScript community is very friendly and many people will be more than happy to give you their time. I’d look out for someone who’s active and friendly on Twitter, and does the kind of work you’d like to do. Introduce yourself over Twitter or send them an email. I wouldn’t expect a full tutoring course (although that is another option!) but they’ll be very glad to answer a question and any follow-ups every now and then. Go to a workshop Many conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there’s one in your area. Workshops are great because you can ask direct questions, and you’re in an environment where others are learning just like you are — no need to learn alone! Set yourself challenges This is one way I like to learn new things. When there’s something new that I’m not very good at, I pick a project that I think is just out of my reach and I try to build it. It’s learning by doing and, even if you fail, it can be enormously valuable. Where to start? If you’ve decided learning JavaScript is an important step for you, your next question may well be where to go from here. I’ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want. Beginner If you’re just getting started with JavaScript, I’d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They’re all reputable sources (although, I’ve included something I wrote — you can decide about that one!) and will not lead you astray. jQuery’s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro. Codecademy’s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you. HTMLDog’s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. [Disclaimer: I wrote this stuff, so it comes with a hazard warning!] The tuts+ jQuery to JavaScript article mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript.
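To give you a flavour of that kind of conversion before we go deeper, here is a tiny sketch (the selector and class name are made up, and the pattern simply mirrors the querySelectorAll comparison from earlier):

// With jQuery
$('.menu a').on('click', function () {
  $(this).toggleClass('active');
});

// Without jQuery, in a modern browser
var links = document.querySelectorAll('.menu a');
[].slice.call(links).forEach(function (link) {
  link.addEventListener('click', function () {
    link.classList.toggle('active');
  });
});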
Getting in-depth For more comprehensive documentation and help I’d recommend adding these places to your list of go-tos. MDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it’s a great place to just go and browse. Axel Rauschmayer’s 2ality is a stunning collection of articles that will take you deep into JavaScript. It’s certainly worth looking at. Addy Osmani’s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications. And finally… I think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you’ll get there. I bet you’ll learn a whole lot along the way. Good luck! Many thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag. 2013 Tom Ashworth tomashworth 2013-12-05T00:00:00+00:00 https://24ways.org/2013/javascript-taking-off-the-training-wheels/ code
37 JavaScript Modules the ES6 Way JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. This is something nearly all other languages come with out of the box, whether it be Ruby’s require, Python’s import, or any other language you’re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let’s be clear: it doesn’t mean the new module system in the upcoming version of JavaScript won’t be useful to you if you’re building smaller websites rather than the next Instagram. Thankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won’t be landing in browsers for a while yet – but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I’d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It’s much simpler than you might think. What is ES6? ECMAScript is a scripting language that is standardised by a standards organisation called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It’s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. You shouldn’t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet. The ES6 module spec The ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I’ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it’s what I’ll focus on more. Module syntax The key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we’ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it.
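To picture that relationship before we dig into the syntax, here is a sketch of the carousel example (the file name and function are invented for illustration, and it assumes a hypothetical ES6-packaged jQuery that provides a default export):

// carousel.js
// jQuery is a dependency: we import it
import $ from 'jquery';

// carousel is this module's export: anything that
// imports carousel.js gets access to this function
export function carousel(selector) {
  return $(selector).addClass('carousel');
}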
Another important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There’s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous. Named exports Modules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword: export function double(x) { return x + x; }; You can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces. var double = function(x) { return x + x; } export { double }; A module can then import the double function like so: import { double } from 'mymodule'; double(2); // 4 Again, curly braces are required around the variable you would like to import. It’s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension. The reason for those extra braces is that this syntax lets you export multiple variables: var double = function(x) { return x + x; } var square = function(x) { return x * x; } export { double, square } I personally prefer this syntax over the export function …, but only because it makes it much clearer to me what the module exports. Typically I will have my export {…} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting. A file importing both double and square can do so in just the way you’d expect: import { double, square } from 'mymodule'; double(2); // 4 square(3); // 9 With this approach you can’t easily import an entire module and all its methods. This is by design – it’s much better and you’re encouraged to import just the functions you need to use. Default exports Along with named exports, the system also lets a module have a default export. This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so: export default function(x) { return x + x; } And that can be imported: import double from 'mymodule'; double(2); // 4 This time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you’d like. Default exports are not named, so you can import them as anything you like: import christmas from 'mymodule'; christmas(2); // 4 The above is entirely valid. Although it’s not something that is used too often, a module can have both named exports and a default export, if you wish. One of the design goals of the ES6 modules spec was to favour default exports. There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that’s fine, and you shouldn’t change that to meet the preferences of those designing the spec. Programmatic API Along with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It’s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user’s browser didn’t support a feature your app relied on. 
An example of doing this is: if(someFeatureNotSupported) { System.import('my-polyfill').then(function(myPolyFill) { // use the module from here }); } System.import will return a promise, which, if you’re not familiar, you can read about in this excellent article on HTML5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import) is complete. This programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task. How to use it today It’s all well and good having this new syntax, but right now it won’t work in any browser – and it’s not likely to for a long time. Maybe in next year’s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we’re stuck with a bit of extra work. ES6 module transpiler One solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it – not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). There are also plugins for all the popular build tools, including Grunt and Gulp. The advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should have that set up already. If you don’t have any system in place for handling modules in the browser, using the transpiler doesn’t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn’t do anything to help you get that code running in the browser, but if you have that part sorted it’s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler. SystemJS Another solution is SystemJS. It’s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser). To load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill; and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn’t have an easy-to-find downloadable version. The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 compiler.
It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you. All you need to do to get SystemJS running is to add a <script> element to load SystemJS into your webpage. It will then automatically load the ES6 module loader and Traceur files when it needs them. In your HTML you then need to use System.import to load in your module: <script> System.import('./app'); </script> When you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn’t make a lot of requests on the live site. In development though, it makes for a really nice workflow. When working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application’s modules. This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file. SystemJS also provides a workflow for bundling your application together into one file. Conclusion ES6 modules may be at least six months to a year away (if not more) but that doesn’t mean they can’t be used today. Although there is an overhead to using them now – with the work required to set up SystemJS, the module transpiler, or another solution – that doesn’t mean it’s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You’ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution. If you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. Investing time in learning it now will pay off hugely further down the road. Further reading If you’d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources: ECMAScript 6 modules: the final syntax by Axel Rauschmayer Practical Workflows for ES6 Modules by Guy Bedford ECMAScript 6 resources for the curious JavaScripter by Addy Osmani Tracking ES6 support by Addy Osmani ES6 Tools List by Addy Osmani Using Grunt and the ES6 Module Transpiler by Thomas Boyt JavaScript Modules and Dependencies with jspm by myself Using ES6 Modules Today by Guy Bedford 2014 Jack Franklin jackfranklin 2014-12-03T00:00:00+00:00 https://24ways.org/2014/javascript-modules-the-es6-way/ code
153 JavaScript Internationalisation or: Why Rudolph Is More Than Just a Shiny Nose Dunder sat, glumly staring at the computer screen. “What’s up, Dunder?” asked Rudolph, entering the stable and shaking off the snow from his antlers. “Well,” Dunder replied, “I’ve just finished coding the new reindeer intranet Santa Claus asked me to do. You know how he likes to appear to be at the cutting edge, talking incessantly about Web 2.0, AJAX, rounded corners; he even spooked Comet recently by talking about him as if he were some pushy web server. “I’ve managed to keep him happy, whilst also keeping it usable, accessible, and gleaming — and I’m still on the back row of the sleigh! But anyway, given the elves will be the ones using the site, and they come from all over the world, the site is in multiple languages. Which is great, except when it comes to the preview JavaScript I’ve written for the reindeer order form. Here, have a look…” As he said that, he brought up the order form in French (/examples/javascript-internationalisation/initial.fr.html) on the screen. (Same in English). “Looks good,” said Rudolph. “But if I add some items,” said Dunder, “the preview appears in English, as it’s hard-coded in the JavaScript. I don’t want separate code for each language, as that’s just silly — I thought about just having if statements, but that doesn’t scale at all…” “And there’s more, you aren’t displaying large numbers in French properly, either,” added Rudolph, who had been playing and looking at part of the source code: function update_text() { var hay = getValue('hay'); var carrots = getValue('carrots'); var bells = getValue('bells'); var total = 50 * bells + 30 * hay + 10 * carrots; var out = 'You are ordering ' + pretty_num(hay) + ' bushel' + pluralise(hay) + ' of hay, ' + pretty_num(carrots) + ' carrot' + pluralise(carrots) + ', and ' + pretty_num(bells) + ' shiny bell' + pluralise(bells) + ', at a total cost of <strong>' + pretty_num(total) + '</strong> gold pieces. Thank you.'; document.getElementById('preview').innerHTML = out; } function pretty_num(n) { n += ''; var o = ''; for (i=n.length; i>3; i-=3) { o = ',' + n.slice(i-3, i) + o; } o = n.slice(0, i) + o; return o; } function pluralise(n) { if (n!=1) return 's'; return ''; } “Oh, botheration!” cried Dunder. “This is just so complicated.” “It doesn’t have to be,” said Rudolph, “you just have to think about things in a slightly different way from what you’re used to. As this is only a simple example, we won’t be able to cover all possibilities, but for starters, we need some way of providing different information to the script dependent on the language. We’ll create a global i18n object, say, and fill it with the correct language information. The first variable we’ll need will be a thousands separator, and then we can change the pretty_num function to use that instead: function pretty_num(n) { n += ''; var o = ''; for (i=n.length; i>3; i-=3) { o = i18n.thousands_sep + n.slice(i-3, i) + o; } o = n.slice(0, i) + o; return o; } “The i18n object will also contain our translations, which we will access through a function called _() — that’s just an underscore. Other languages have a function of the same name doing the same thing.
It’s very simple: function _(s) { if (typeof(i18n)!='undefined' && i18n[s]) { return i18n[s]; } return s; } “So if a translation is available and provided, we’ll use that; otherwise we’ll default to the string provided — which is helpful if the translation begins to lag behind the site’s text at all, as at least something will be output.” “Got it,” said Dunder. “ _('Hello Dunder') will print the translation of that string, if one exists, ‘Hello Dunder’ if not.” “Exactly. Moving on, your plural function breaks even in English if we have a word where the plural doesn’t add an s — like ‘children’.” “You’re right,” said Dunder. “How did I miss that?” “No harm done. Better to provide both singular and plural words to the function and let it decide which to use, performing any translation as well: function pluralise(s, p, n) { if (n != 1) return _(p); return _(s); } “We’d have to provide different functions for different languages as we employed more elves and got more complicated — for example, in Polish, the word ‘file’ pluralises like this: 1 plik, 2-4 pliki, 5-21 plików, 22-24 pliki, 25-31 plików, and so on.” (More information on plural forms) “Gosh!” “Next, as different languages have different word orders, we must stop using concatenation to construct sentences, as it would be impossible for other languages to fit in; we have to keep coherent strings together. Let’s rewrite your update function, and then go through it: function update_text() { var hay = getValue('hay'); var carrots = getValue('carrots'); var bells = getValue('bells'); var total = 50 * bells + 30 * hay + 10 * carrots; hay = sprintf(pluralise('%s bushel of hay', '%s bushels of hay', hay), pretty_num(hay)); carrots = sprintf(pluralise('%s carrot', '%s carrots', carrots), pretty_num(carrots)); bells = sprintf(pluralise('%s shiny bell', '%s shiny bells', bells), pretty_num(bells)); var list = sprintf(_('%s, %s, and %s'), hay, carrots, bells); var out = sprintf(_('You are ordering %s, at a total cost of <strong>%s</strong> gold pieces.'), list, pretty_num(total)); out += ' '; out += _('Thank you.'); document.getElementById('preview').innerHTML = out; } “ sprintf is a function in many other languages that, given a format string and some variables, slots the variables into place within the string. JavaScript doesn’t have such a function, so we’ll write our own. Again, keep it simple for now, only integers and strings; I’m sure more complete ones can be found on the internet. function sprintf(s) { var bits = s.split('%'); var out = bits[0]; var re = /^([ds])(.*)$/; for (var i=1; i<bits.length; i++) { p = re.exec(bits[i]); if (!p || arguments[i]==null) continue; if (p[1] == 'd') { out += parseInt(arguments[i], 10); } else if (p[1] == 's') { out += arguments[i]; } out += p[2]; } return out; } “Lastly, we need to create one file for each language, containing our i18n object, and then include that from the relevant HTML. Here’s what a blank translation file would look like for your order form: var i18n = { thousands_sep: ',', "%s bushel of hay": '', "%s bushels of hay": '', "%s carrot": '', "%s carrots": '', "%s shiny bell": '', "%s shiny bells": '', "%s, %s, and %s": '', "You are ordering %s, at a total cost of <strong>%s</strong> gold pieces.": '', "Thank you.": '' }; “If you implement this across the intranet, you’ll want to investigate the xgettext program, which can automatically extract all strings that need translating from all sorts of code files into a standard .po file (I think Python mode works best for JavaScript). 
You can then use a different program to take the translated .po file and automatically create the language-specific JavaScript files for us.” (e.g. German .po file for PledgeBank, mySociety’s .po-.js script, example output) With a flourish, Rudolph finished editing. “And there we go, localised JavaScript in English, French, or German, all using the same main code.” “Thanks so much, Rudolph!” said Dunder. “I’m not just a pretty nose!” Rudolph quipped. “Oh, and one last thing — please comment liberally explaining the context of strings you use. Your translator will thank you, probably at the same time as they point out the four hundred places you’ve done something in code that only works in your language and no-one else’s…” Thanks to Tim Morley and Edmund Grimley Evans for the French and German translations respectively. 2007 Matthew Somerville matthewsomerville 2007-12-08T00:00:00+00:00 https://24ways.org/2007/javascript-internationalisation/ code
241 Jank-Free Image Loads There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then — oops! — an image pops in above it, pushing said text down the page, and our poor reader loses their place. By default, partially-loaded pages have the user experience of a slippery fish, or a spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I’ll chart a path into a jank-free future – one in which it’s easy and natural to author <img> elements that load like this: [video demo of a jank-free image load]. Jank is a very old problem, and there is a very old solution to it: the width and height attributes on <img>. The idea is: if we stick an image’s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives. width Specifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network. —The HTML 3.2 Specification, published on January 14 1997 Unfortunately for us, when width and height were first spec’d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird: See the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen. width and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths. I have good news, bad news, and great news. The good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them. The bad news is that these techniques are all terrible, cumbersome hacks. They’re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways. So, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform. aspect-ratio in CSS The first proposed feature? An aspect-ratio property in CSS! This would allow us to write CSS like this: img { width: 100%; } .thumb { aspect-ratio: 1/1; } .hero { aspect-ratio: 16/9; } This’ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above. Alas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It’s images – possibly user generated images – that can have any aspect ratio. The really tricky problem is unknown-when-you’re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles: <img src="image.jpg" style="aspect-ratio: 5/4" /> And inline styles give me the heebie-jeebies!
As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style="" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I’m cutting off my, (or anyone else’s) ability to change those aspect ratios, for whatever reason, later. How might we specify aspect ratios at a lower level? How might we give browsers information about an image’s dimensions, without giving them explicit instructions about how to style it? I’ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio! A brief note on intrinsic and extrinsic sizing What do I mean by “intrinsic” and “extrinsic?” The intrinsic size of an image is, put simply, how big it’d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800×600 image has an intrinsic width of 800px. The extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800×600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn’t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px. It surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity). CSS aspect-ratio lets us avoid setting extrinsic heights and widths – and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it. The last tool I’m going to talk about gets us out of the extrinsic sizing game all together — which, I think, is only appropriate for a feature that we’re going to be using in HTML. intrinsicsize in HTML The proposed intrinsicsize attribute will let you do this: <img src="image.jpg" intrinsicsize="800x600" /> That tells the browser, “hey, this image.jpg that I’m using here – I know you haven’t loaded it yet but I’m just going to let you know right away that it’s going to have an intrinsic size of 800×600.” This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image’s intrinsic size. You may ask (I did!): wait, what if my <img> references multiple resources, which all have different intrinsic sizes? Well, if you’re using srcset, intrinsicsize is a bit of a misnomer – what the attribute will do then, is specify an intrinsic aspect ratio: <img srcset="300x200.jpg 300w, 600x400.jpg 600w, 900x600.jpg 900w, 1200x800.jpg 1200w" sizes="75vw" intrinsicsize="3x2" /> In the future (and behind the “Experimental Web Platform Features” flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 – it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2. Can’t wait I seem to have gotten myself into the weeds a bit. Sizing on the web is complicated! 
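One last detail, for reference: the sort of cumbersome hack mentioned earlier usually looks something like the sketch below. It reserves a fluid 4:3 space by exploiting the fact that percentage padding is resolved against the container’s width (the ratio-box class name is made up for illustration):

<div class="ratio-box">
  <img src="image.jpg" alt="" />
</div>

.ratio-box {
  position: relative;
  height: 0;
  /* padding is a percentage of the *width*, so 75% reserves a 4:3 box */
  padding-bottom: 75%;
}
.ratio-box img {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
}

It works in every browser, but it is exactly the kind of indirection that aspect-ratio and intrinsicsize should let us retire.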
Don’t let all of these details bury the big takeaway here: sometime soon (🤞 2019‽ 🤞), we’ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can’t wait! 2018 Eric Portis ericportis 2018-12-21T00:00:00+00:00 https://24ways.org/2018/jank-free-image-loads/ code
244 It’s Beginning to Look a Lot Like XSSmas I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional. It’s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it. With platforms like GitHub and npm, giving the gift of code is so easy it’s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it’s also risky. Vulnerabilities and Open Source Open source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps. Graph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017 In the first 24 ways article this year, Katie referenced a few different types of vulnerability: Cross-site Request Forgery (also known as CSRF) SQL Injection Cross-site Scripting (also known as XSS) There are many more types of vulnerability, and those that live under the same category share similarities. For example, my favourite – is it weird to have a favourite vulnerability? – is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks – share it generously with your colleagues. Most vulnerabilities like this are not added intentionally – they’re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library. Others, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer’s curiosity. Another tactic to get developers to unwittingly install malicious packages into their codebase is “typosquatting” – back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env). This is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas. 
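Before we pile into the sleigh, a quick health check is worth a mention. If you have a Node.js project to hand, two commands will show you what you’ve already been gifted (a sketch assuming npm 6 or later, where npm audit is available):

# list your dependencies, and their dependencies
npm ls

# check the whole tree against npm's database of known vulnerabilities
npm audit

Dedicated tools such as Snyk go further, but even this much tells you what is really sitting in your node_modules folder.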
Santa’s on his sleigh If you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn’t directly install, but are dependencies of your dependencies), and these can contain vulnerabilities too. Let’s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that’s just the tip of the iceberg – have a look at the full dependency tree which contains 109 dependencies – more dependencies than there are Christmas puns in this article. Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies. Fixing vulnerabilities – the ultimate Christmas gift If you’re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy. When you find out about a new vulnerability, you have some options: Fix the vulnerability via an upgrade You can often fix a vulnerability by upgrading the library to the latest version. Make sure you’re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version. Patch the vulnerable code Sometimes, a fix for a vulnerable library isn’t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won’t change anything else. Switch to a different library If the library you’re using has no fix or patch, you may be better off switching it out for another one, particularly if it looks like it’s no longer being maintained. Responsibly disclosing vulnerabilities Knowing how to responsibly disclose vulnerabilities is something I’m ashamed to admit that I didn’t know about before I joined a security company. But it’s so important! On discovering a new vulnerability, a developer has a few options: A malicious developer will exploit that vulnerability for their own gain. A reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited. An ethical and aware developer will follow what’s known as a “responsible disclosure process”. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too. It’s important to understand this process if you’re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code. So what does responsible disclosure look like?
I’ll take Node.js’s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There’s a separate process for bugs found in third-party npm packages). Once you’ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they’ll give you regular updates about the progress they’re making towards fixing it. As part of this, they’ll figure out which versions are affected, and prepare fixes for them. They’ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org. Tim Kadlec published an in-depth article about responsible disclosures if you’re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed. Encourage responsible disclosure Add a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability – 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn’t published one. Stats from the State of Open Source Security 2017 Bug bounties Some companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, they reduce the enticement to exploit it instead, since reporting it brings a quick cash reward. HackerOne is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. WordPress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program. If you don’t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets for a report about a critical vulnerability in exchange for money. On one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question. A gift worth giving It’s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool – I’m just a little biased towards Snyk, but as Katie mentioned, there’s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities. Stats from the State of Open Source Security 2017 Most importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too. 2018 Anna Debenham annadebenham 2018-12-17T00:00:00+00:00 https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/ code
198 Is Your Website Accidentally Sexist? Women make up 51% of the world’s population. More importantly, women make 85% of all purchasing decisions about consumer goods, 75% of the decisions about buying new homes, and 81% of decisions about groceries. The chances are, you want your website to be as attractive to women as it is to men. But we are all steeped in a male-dominated culture that subtly influences the design and content decisions we make, and some of those decisions can result in a website that isn’t as welcoming to women as it could be. Typography tells a story Studies show that we make consistent judgements about whether a typeface is masculine or feminine: Masculine typography has a square or geometric form with hard corners and edges, and is emphatically either blunt or spiky. Serif fonts are also considered masculine, as is bold type and capitals. Feminine typography favours slim lines, curling or flowing shapes with a lot of ornamentation and embellishment, and slanted letters. Sans-serif, cursive and script fonts are seen as feminine, as are lower case letters. The effect can be so subtle that even choosing between bold and regular styles within a single font family can be enough to indicate masculinity or femininity. If you want to appeal to both men and women, search for fonts that are gender neutral, or at least not too masculine. When you’re choosing groups of fonts that need to work harmoniously together, consider which fonts you are prioritising in your design. Is the biggest word on the page in a masculine or feminine font? What about the smallest words? Is there an imbalance between the prominence of masculine and feminine fonts, and what does this imply? Typography is a language in and of itself, so be careful what you say with it. Colour me unsurprised Colour also has an obvious gender bias. We associate pinks and purples, especially in combination, with girls and women, and a soft pink has become especially strongly related to breast cancer awareness campaigns. On the other hand, pale blue is strongly associated with boys and men, despite the fact that pastels are usually thought of as more feminine. These associations are getting stronger and stronger as more and more marketers use them to define products as “for girls” and “for boys”, setting expectations from an incredibly young age — children as young as four understand gender stereotypes. It should be obvious that using these highly gender-associated colours sends an incredibly strong message to your visitors about who you think your target audience is. If you want to appeal to both men and women, then avoid pinks and pale blues. But men and women also have different colour preferences. Men tend to prefer intense primary colours and deeper colours (shades), and tolerate greys better, whilst women prefer pastels (tints). When choosing colours, consider not just the hue itself, but also tint, tone and shade. Slightly counterintuitively, everyone likes blue, but no one seems to particularly like brown or orange. A picture is worth a thousand words, or none Stock photos are the quickest and easiest way to add a little humanity to your website, directly illustrating the kind of people you believe are in your audience. But the wrong photo can put a woman off before she’s even read your text. A website about a retirement home will, for example, obviously include photos of older people, and a baby clothes retailer will obviously show photos of babies. 
But, in the latter case, should they also show only photographs of mothers with their children, or should they include fathers too? It’s true that women take on the majority of childcare responsibilities, but that’s a cultural holdover from a previous era, rather than some rule of law. We are seeing increasing numbers of stay-at-home dads as well as single dads, so showing only photographs of women both reinforces the stereotype that only women can care and marginalises male carers. Equally, featuring prominent photographs of women on sites about male-dominated topics such as science, technology or engineering helps women feel welcomed and appreciated in those fields. Photos really do speak volumes, so make sure that you also represent other marginalised groups, especially ethnic groups. If people do not see themselves represented on your site, they are not going to engage with it as much as they might. Another form of picture that we often ignore is the icon. When you do use icons, make sure that they are gender neutral. For example, avoid using an icon of a man to denote engineers, or of a woman to denote nurses. Avoid overly masculine or feminine metaphors, such as a hammer to denote DIY or a flower to denote gardens. Not only are these gendered, they’re also trite and unappealing, so come up with more exciting and novel metaphors. Use gender-neutral language Last, but not least, be very careful in your use of gender in language. Pronouns are an obvious pitfall. A lot of web content is written in the second person, using the clearly gender-neutral ‘you’, but if you have to write in the third person, which uses ‘she’, ‘he’, ‘it’, and ‘they’, then be very careful which pronouns you use. The singular ‘they’ is becoming more widely acceptable, and is a useful gender-neutral option. If you must use generic ‘he’ and ‘she’ (as opposed to talking about a specific person), then vary the order that they come in, so don’t always put the male pronoun first. When you are talking about people, make sure that you use the same level of formality for both men and women. The tendency is to refer to men by their surname and women by their first name so, for example, when people are talking about Ada Lovelace and Charles Babbage, they often talk about “Ada and Babbage”, rather than “Lovelace and Babbage” or “Ada and Charles”. As a rule, it’s best to use people’s surnames in formal and semi-formal writing, and their first names only in very informal writing. It’s also very important to make sure that you respect people’s honorifics, especially academic titles such as Dr or Professor, and that you use titles consistently. Studies show that women and people of colour are the most likely to have their honorifics dropped, which is not only disrespectful, it gives readers the idea that women and people of colour are less qualified than white men. If you mention job titles, avoid old-fashioned gendered titles such as ‘chairman’, and instead look for a neutral version, like ‘chair’ or ‘chairperson’. Where neutral terms have strong gender associations, such as nurse or engineer, take special care that the surrounding text, especially pronouns, is diverse and/or neutral. Do not assume engineers are male and nurses female. More subtle intimations of gender can be found in the descriptors people use. Military metaphors and phrases, out-sized claims, competitive words, and superlatives are masculine, such as ‘ground-breaking’, ‘best’, ‘genius’, ‘world-beating’, or ‘killer’.
Excessive unnecessary factual detail is also very masculine. Women tend to relate to more cooperative, non-competitive, future-focused, and warmer language, paired with more general information. Women’s language includes words like ‘global’, ‘responsive’, ‘support’, ‘include’, ‘engage’ and ‘imagine’. Focus more on the kind of relationship you can build with your customers, how you can help make their lives easier, and less on your company or product’s status. Smash the patriarchy, one assumption at a time We’re all brought up in a cultural stew that prioritises men’s needs, feelings and assumptions over women’s. This is the patriarchy, and it’s been around for thousands of years. But given women’s purchasing power, adhering to the patriarchy’s norms is unlikely to be good for your business. If you want to tap into the female market, pay attention to the details of your design and content, and make sure that you’re not inadvertently putting women off. A gender neutral website that designs away gender stereotypes will attract both men and women, expanding your market and helping your business flourish. 2017 Suw Charman-Anderson suwcharmananderson 2017-12-20T00:00:00+00:00 https://24ways.org/2017/is-your-website-accidentally-sexist/ content
45 Is Agile Harder for Agencies? I once sat in a pitch meeting and watched a new business exec tell a potential client that his agency followed an agile workflow process at all times. The potential client nodded wisely, and they both agreed that agile was indeed the way to go. The meeting progressed and they signed off on a contract for a massive project, to be delivered in a standard waterfall fashion, with all manner of phases and key deliverables. Of course both of them left the meeting perfectly happy, because neither of them knew nor cared what an agile workflow process might be. That was about five years ago. As 2015 heaves into view I think it’s fair to say that attitudes have changed. Perhaps the same number of people claim to do Agile™ now as in 2010, but I think more of them are telling the truth. As a developer in an agency that works primarily with larger organisations, this year I have started to see a shift from agencies pushing agile methodologies with their clients, to clients requesting and even demanding agile practices from their agencies. Only a couple of years ago this would have been unusual behaviour. So what’s the problem? We should be happy then, no? Those of us in agencies will get to spend more time delivering great products, and less time arguing over out-of-date functional specs or battling through an adversarial change management procedure because somebody had a good idea during development rather than planning. We get to be a little bit more like our brothers and sisters in vaunted teams like the Government Digital Service, which is using agile approaches to great effect on projects that have a real benefit to their users. Almost. Unfortunately, it seems to be the case that adhering to an agile framework such as scrum is more difficult within an agency/client structure than it is for an in-house development team. This is no surprise. The Agile Manifesto was written in 2001 by a group of software developers for their own use. Many of the underlying principles of a framework like Scrum assume the existence of an in-house team, working on a highly technical project, and working for the business that employs them. The agency/client model must to some extent be retrofitted into agile frameworks. It can be done though, and there are plenty of agencies out there doing it well. This article isn’t meant to be another introduction to agile techniques – there are too many of those online already. This article is for people just dipping their toes into this way of working. I’ve laid out a few of the key reasons why adopting a more fully agile approach seems difficult, at least initially, for those of us working in agencies. 1. Agile asks more of your clients When a team adopts Scrum everyone has to get used to a number of unfamiliar roles and rituals. Few team members have a steeper learning curve than the person designated as the product owner. The product owner carries a lot of weight on their shoulders. They have to uphold the overall vision for the project. They are also meant to be the primary author of the project’s user stories (short atomic descriptions of project features which are testable and relate to a real business need). They should own this list of stories (called a backlog) and should be able to prioritise the order in which the stories are developed, to ensure that the project is delivering real value to the business early and often. 
When a burst of work is completed (bursts of work in Scrum are called sprints), the product owner leads a review or show-and-tell session with the wider project stakeholders. The product owner needs to understand the work that has been completed, and must champion it to the business. Finally, and most importantly, the product owner is responsible for managing the feedback and requests from stakeholders in such a way that they don’t derail the project team’s agreed workload for any given sprint, without upsetting or offending any of the stakeholders – some of whom may outrank the product owner. If you follow that spec, this is a job for a superhuman in any organisational context. And within the agency/client structure this superhuman needs to be client-side for the process to be at its most effective. So your client, who in the past might have briefed a project to an agency team and then had the work presented back to them every few weeks, is now asked to be involved with the team on a daily basis; to fight on behalf of the team when new or difficult requests come in from senior figures within their organisation; and to present the agency’s work to their own colleagues after each sprint. It’s a big change if all that gets dropped into someone’s lap without warning. There are several ways agencies can mitigate this issue. The ScrumAlliance suggests some alternative ways to structure the product owner role. The approach I have taken in the past is simply to start slow, and gradually move more of the product owner role over to the client side as and when they feel comfortable with it. If you’re working together long-term on a project, and you both see tangible improvements in the quality of the work after adopting an agile process, then your client is more likely to be open to further changes as the partnership progresses. 2. My client wants fixed costs, fixed deadlines and a fixed scope I know. Mine too. Of course they do – it is the way that agencies and clients have agreed to work in digital and other creative service industries for a very long time. On both sides of the fence we’re used to thinking about projects in this way. Of the three, fixing scope is the one that agile purists would rail hardest against. The more time we spend working on digital projects, the less sense it makes. James Archer, CEO of UI/UX design agency Forty puts it like this: For me, the Agile approach is really about acknowledging that disturbing truth that every project manager knows, but has trouble admitting. The truth that the project plan is wrong. Scope creep. Change orders. Shifting priorities. New directions. We act shocked and appalled when those things happen during our carefully planned project, even though they happen on every project ever. Successful relationships require trust and honesty, and we shouldn’t be afraid of discussing this aspect of project management. If you do move away from a fixed scope of work, then the other two items (costs and timings) can be fixed – more or less. If you can get your clients to buy into this from a standing start then you are doing well. In fact you probably deserve a promotion. For most of us this is a continual discussion. Anyway, as soon as you’ve made headway on the argument that it makes little or no sense to try and fix the scope of a digital project, you usually run into a related concern, which we’ll look at next. 3. Fear of uncontrolled costs We all know that a dog is for life, not just for Christmas. 
At this time of year perhaps we should reiterate to everyone that digital products and services also need support and love once we have taken the decision to bring them into the world. More organisations are realising that their investment in digital platforms should be viewed as an operational expenditure rather than a capital expenditure. But from time to time we will find ourselves working on projects for people who have a finite amount of money to invest in a product at a given point in time. When agencies start talking about these projects as rolling investments those responsible can understandably worry about their costs running out of control. There’s another factor at play here. Agile, on the whole, prefers to derive a cost for services from the hours a team spends working on a project. In other industries this is referred to as charging for time and materials, and there seems to be an ingrained distrust in this approach among people in general. See, for example, the Citizens Advice Bureau’s “Top tips for employing a builder”: “Bear in mind that if you pay a daily rate, this makes it easier for a builder to string the work out and get more money so agree what you will do if the job takes longer than expected.” It’s hard not to feel stung if you are in the builder’s shoes here, as we are when we’re talking about our role as an agency. But if you’ve ever haggled with a builder over time and materials, and also moaned about your clients misunderstanding agile methods, take a moment to reflect on the similarities from your client’s point of view. Again, there are some things we can do to mitigate this issue. Some agencies put in place a service level agreement around their team’s velocity (an agile-related term related to how much work a team delivers in any given sprint) and this can help. As the industry moves further towards a long-term approach to investment in digital I hope this fear will subside. But that shift in approach leads to the final concern I want to address. 4. Agency structures need shaking up If you work for a company that has spent many years developing a business model around the waterfall process, you may have to break through many layers of entrenched thinking in order to establish new practices and effect organisational change. There are consultancies that exist specifically to help agencies through their own agile transformation. One of these companies, AgencyAgile, provides a helpful list of common pitfalls. They emphasise the need to look at your whole agency’s structure, rather than simply encouraging project teams to adopt new workflows. Even awesomely run Agile projects can have a limited impact on the overall organization. If you’re serious about changing the way your company approaches projects then try talking to people who sit outside the usual project delivery team. Speak to the finance department if you have one, and try to convince your senior management team if they’re not already on board. And definitely speak to your new business people, who go out there and win the projects you get to work on. It’s these people who need to understand the potential business benefits of working in a new way, and also which of their existing habits and behaviours they might need to change to accommodate a new approach. 
Otherwise you’ll find yourself with a team of designers, developers and project managers who are ready and waiting to deliver work in an iterative and collaborative way, but by the time they get hold of the project a cost has already been agreed, a deadline has been imposed, and a functional requirements document has been painstakingly put together. Nobody wins in this situation. Conclusion So where should we go from here? I certainly don’t have hard and fast answers – I’m not sure that they exist in a one-size-fits-all approach for agencies. There are plenty of smart people thinking about this problem. It’s a hot topic right now. Earlier in the year a London-based meetup was established called Agile for Agencies. If you’re in the capital and want to discuss these issues with your peers it’s a great opportunity to do so. I’ve mentioned James Archer and Forty already. Both James and Paul Boag have written in the last twelve months on this subject. They both come out on the side of the argument that suggests you adopt agile principles, but don’t have to worry about the rituals if they don’t fit in with your practices. Personally, I think the rituals and the discipline mandated by an agile framework like Scrum can provide a great deal of value to your team, even if it is hard to implement within an agency culture that has traditionally structured its work and its services in another way. In whatever way you figure out the details, when your teams collaborate with your clients rather than work for them at arm’s length, and when everyone prioritises frequent delivery, reflection and iteration over exhaustive scoping and planning, I believe you’ll see a tangible difference in the quality of the work that you create. 2014 Charlie Perrins charlieperrins 2014-12-12T00:00:00+00:00 https://24ways.org/2014/is-agile-harder-for-agencies/ process
322 Introduction to Scriptaculous Effects Gather around kids, because this year, much like in that James Bond movie with Denise Richards, Christmas is coming early… in the shape of scrumptious, smooth, javascript-driven effects at your every whim. Now what I’m going to do, is take things down a notch. Which is to say, you don’t need to know much beyond how to open a text file and edit it to follow this article. Personally, I for instance can’t code to save my life. Well, strictly speaking, that’s not entirely true. If my life was on the line, and the code needed was really simple and I wasn’t under any time constraints, then yeah maybe I could hack my way out of it But my point is this: I’m not a programmer in the traditional sense of the word. In fact, what I do best, is scrounge code off of other people, take it apart and then put it back together with duct tape, chewing gum and dumb blind luck. No, don’t run! That happens to be a good thing in this case. You see, we’re going to be implementing some really snazzy effects (which are considerably more relevant than most people are willing to admit) on your site, and we’re going to do it with the aid of Thomas Fuchs’ amazing Script.aculo.us library. And it will be like stealing candy from a child. What Are We Doing? I’m going to show you the very basics of implementing the Script.aculo.us javascript library’s Combination Effects. These allow you to fade elements on your site in or out, slide them up and down and so on. Why Use Effects at All? Before we get started though, let me just take a moment to explain how I came to see smooth transitions as something more than smoke and mirror-like effects included with little more motive than to dazzle and make parents go ‘uuh, snazzy’. Earlier this year, I had the good fortune of meeting the kind, gentle and quite knowledgeable Matt Webb at a conference here in Copenhagen where we were both speaking (though I will be the first to admit my little talk on Open Source Design was vastly inferior to Matt’s talk). Matt held a talk called Fixing Broken Windows (based on the Broken Windows theory), which really made an impression on me, and which I have since then referred back to several times. You can listen to it yourself, as it’s available from Archive.org. Though since Matt’s session uses many visual examples, you’ll have to rely on your imagination for some of the examples he runs through during it. Also, I think it loses audio for a few seconds every once in a while. Anyway, one of the things Matt talked a lot about was how our eyes are wired to react to movement. The world doesn’t flickr. It doesn’t disappear or suddenly change and force us to look for the change. Things move smoothly in the real world. They do not pop up. How it Works Once the necessary files have been included, you trigger an effect by pointing it at the ID of an element. Simple as that. Implementing the Effects So now you know why I believe these effects have a place in your site, and that’s half the battle. Because you see, actually getting these effects up and running is deceptively simple. First, go and download the latest version of the library (as of this writing, it’s version 1.5 rc5). Unzip it and open it up. Now we’re going to bypass the instructions in the readme file. Script.aculo.us can do a bunch of quite advanced things, but all we really want from it is its effects. And by sidestepping the rest of the features, we can shave off roughly 80KB of unnecessary javascript, which is well worth it if you ask me.
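To give you a taste of just how little code is eventually involved, triggering one of the built-in effects against an element’s ID is a one-liner (this sketch assumes the two files we’re about to include below, and an element with the ID ‘content’ somewhere on the page):

<a href="javascript:new Effect.Fade('content');">Fade it out</a>

Effect.Fade is one of the library’s core effects; in a moment we’ll build something slightly smarter than a single hard-wired call.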
As with Drew’s article on Easy Ajax with Prototype, script.aculo.us also uses the Prototype framework by Sam Stephenson. But contrary to Drew’s article, you don’t have to download Prototype, as a version comes bundled with script.aculo.us (though feel free to upgrade to the latest version if you so please). So in the unzipped folder, containing the script.aculo.us files and folders, go into ‘lib’ and grab the ‘prototype.js’ file. Move it to wherever you want to store the javascript files. Then fetch the ‘effects.js’ file from the ‘src’ folder and put it in the same place. To make things even easier for you to get this up and running, I have prepared a small javascript snippet which does some checking to see what you’re trying to do. The script.aculo.us effects are all either ‘turn this off’ or ‘turn this on’. What this snippet does is check to see what state the target currently has (is it on or off?) and then use the necessary effect. You can either skip to the end and download the example code, or copy and paste this code into a file manually (I’ll refer to that file as combo.js): Effect.OpenUp = function(element) { element = $(element); new Effect.BlindDown(element, arguments[1] || {}); } Effect.CloseDown = function(element) { element = $(element); new Effect.BlindUp(element, arguments[1] || {}); } Effect.Combo = function(element) { element = $(element); if(element.style.display == 'none') { new Effect.OpenUp(element, arguments[1] || {}); } else { new Effect.CloseDown(element, arguments[1] || {}); } } Currently, this code uses the BlindUp and BlindDown code, which I personally like, but there’s nothing wrong with you changing the effect-type into one of the other effects available. Now, include the three files in the header of your code, like so: <script src="prototype.js" type="text/javascript"></script> <script src="effects.js" type="text/javascript"></script> <script src="combo.js" type="text/javascript"></script> Now insert the element you want to use the effect on, like so: <div id="content" style="display: none;">Lorem ipsum dolor sit amet.</div> The above element will start out invisible, and when triggered will be revealed. If you want it to start visible, simply remove the style parameter. And now for the trigger: <a href="javascript:Effect.Combo('content');">Click Here</a> And that is pretty much it. Clicking the link should unfold the DIV targeted by the effect, in this case ‘content’. Effect Options Now, it gets a bit long-haired though. The documentation for script.aculo.us is next to non-existent, and because of that you’ll have to do some digging yourself to appreciate the full potential of these effects. First of all, what kind of effects are available? Well, you can go to the demo page and check them out, or you can open the ‘effects.js’ file and have a look around, something I recommend doing regardless, to gain an overview of what exactly you’re dealing with. If you dissect it for long enough, you can even distill some of the options available for the various effects. In the case of the BlindUp and BlindDown effect, which we’re using in our example (as triggered from combo.js), one of the things that would be interesting to play with would be the duration of the effect. If it’s too long, it will feel slow and unresponsive. Too fast and it will be imperceptible. You set the options like so: <a href="javascript:Effect.Combo('content', {duration: .2});">Click Here</a> The change from the previous link being the inclusion of , {duration: .2}. 
In this case, I have lowered the duration to 0.2 seconds, to really make it feel snappy. You can also go all-out and turn on all the bells and whistles of the Blind effect like so (slowed down to a duration of three seconds so you can see what’s going on): <a href="javascript:Effect.Combo('content', {duration: 3, scaleX: true, scaleContent: true});">Click Here</a> Conclusion And that’s pretty much it. The rest is a matter of getting to know the other effects and their options, as well as finding out just when and where to use them. Remember the ancient Chinese saying: Less is more. Download Example I have prepared a very basic example, which you can download and use as a reference point. 2005 Michael Heilemann michaelheilemann 2005-12-12T00:00:00+00:00 https://24ways.org/2005/introduction-to-scriptaculous-effects/ code
323 Introducing UDASSS! Okay. What’s that mean? Unobtrusive Degradable Ajax Style Sheet Switcher! Boy are you in for a treat today ‘cause we’re gonna have a whole lotta Ajaxifida Unobtrucitosity CSS swappin’ Fun! Okay are you really kidding? Nope. I’ve even impressed myself on this one. Unfortunately, I don’t have much time to tell you the ins and outs of what I actually did to get this to work. We’re talking JavaScript, CSS, PHP…Ajax. But don’t worry about that. I’ve always believed that a good A.P.I. is an invisible A.P.I… and this I felt I achieved. The only thing you need to know is how it works and what to do. A Quick Introduction Anyway… First of all, the idea is very simple. I wanted something just like what Paul Sowden put together in Alternative Style: Working With Alternate Style Sheets from A List Apart Magazine EXCEPT with a few minor (not-so-minor actually) differences which I’ve listed briefly below: Allow users to switch styles without JavaScript enabled (degradable) Prevent the F.O.U.C. before the window ‘load’ when getting preferred styles Keep the JavaScript entirely off our markup (no onclick’s or onload’s) Make it very very easy to implement (ok, Paul did that too) What I did to achieve this was to use server-side cookies instead of JavaScript cookies. Hence, PHP. However this isn’t a “PHP style switcher” – which is where Ajax comes in. For the extremely technical folks, no, there is no XML involved here, or even a callback response. I only say Ajax because everyone knows what ‘it’ means. With that said, it’s the Ajax that sets the cookies ‘on the fly’. Got it? Awesome! What you need Luckily, I’ve done the work for you. It’s all packaged up in a nice zip file (at the end…keep reading for now) – so from here on out, just follow these instructions. As I’ve mentioned, one of the things we’ll be working with is PHP. So, first things first, open up a file called index and save it with a ‘.php’ extension. Next, place the following text at the top of your document (even above your DOCTYPE): <?php require_once('utils/style-switcher.php'); // style sheet path[, media, title, bool(set as alternate)] $styleSheet = new AlternateStyles(); $styleSheet->add('css/global.css','screen,projection'); // [Global Styles] $styleSheet->add('css/preferred.css','screen,projection','Bog Standard'); // [Preferred Styles] $styleSheet->add('css/alternate.css','screen,projection','Tiny Fonts',true); // [Alternate Styles] $styleSheet->add('css/alternate2.css','screen,projection','Big O Fonts',true); // [Alternate Styles] $styleSheet->getPreferredStyles(); ?> The way this works is REALLY EASY. Pay attention closely. Notice in the first line we’ve included our style-switcher.php file. Next we instantiate a PHP class called AlternateStyles() which will allow us to configure our style sheets. So for kicks, let’s just call our object $styleSheet. As part of the AlternateStyles object, there lies a public method called add. So naturally with our $styleSheet object, we can call it to (da – da-da-da!) Add Style Sheets! How the add() method works The add method takes up to four arguments; only one is required. However, you’ll want to add some… since the whole point is working with alternate style sheets. $path can simply be a uri, absolute, or relative path to your style sheet. $media adds a media attribute to your style sheets. $title gives a name to your style sheets (via title attribute). $alternate (a boolean) simply tells us that these are the alternate style sheets. 
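Before the usage tips, a quick mental model. Since the bundled JavaScript is deliberately invisible, it helps to picture roughly what “the Ajax that sets the cookies” boils down to. This is only an illustrative sketch using Prototype’s Ajax.Request; it is not the actual contents of alternateStyles.js:

// Hypothetical sketch: re-request the current page with a css parameter
// so the server-side switcher can set the preference cookie on the fly.
new Ajax.Request('index.php', {
  method: 'get',
  parameters: 'css=Big_O_Fonts'
});

The visible page never reloads; the request exists purely so the PHP gets a chance to drop the cookie. Now, on to the tips.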
add() Tips For all global style sheets (meaning the ones that will always be seen and will not be swapped out), simply use the add method as shown next to // [Global Styles]. To add preferred styles, do the same, but add a ‘title’. To add the alternate styles, do the same as what we’ve done to add preferred styles, but add the extra boolean and set it to true. Note the following when adding style sheets: Multiple global style sheets are allowed You can only have one preferred style sheet (That’s a browser rule) Feel free to add as many alternate style sheets as you like Moving on Simply add the following snippet to the <head> of your web document: <script type="text/javascript" src="js/prototype.js"></script> <script type="text/javascript" src="js/common.js"></script> <script type="text/javascript" src="js/alternateStyles.js"></script> <?php $styleSheet->drop(); ?> Nothing much to explain here. Just use your copy & paste powers. How to Switch Styles Whether you knew it or not, this baby already has the built-in ‘unobtrusive’ functionality that lets you switch styles with the click of any link that has a class name of ‘altCss’. Just drop them wherever you like in your document as follows: <a class="altCss" href="index.php?css=Bog_Standard">Bog Standard</a> <a class="altCss" href="index.php?css=Tiny_Fonts">Tiny Fonts</a> <a class="altCss" href="index.php?css=Big_O_Fonts">Big O Fonts</a> Take special note where the file is linking to. Yep. Just linking right back to the page we’re on. The only extra parameter we pass in is a variable called ‘css’ – and within that we append the names of our style sheets. Also take very special note that the names of the style sheets have an under_score in place of any spaces we might have. Go ahead… play around and change the style sheet on the example page. Try disabling JavaScript and refreshing your browser. Still works! Cool eh? Well, I put this together in one night so it’s still a work in progress and very beta. If you’d like to hear more about it and its future development, be sure to stop by my site where I’ll definitely be maintaining it. Download the beta anyway Well this wouldn’t be fun if there was nothing to download. So we’re hooking you up so you don’t go home (or log off) unhappy. Download U.D.A.S.S.S | V0.8 Merry Christmas! Thanks for listening and I hope U.D.A.S.S.S. has been well worth your time and will bring many years of Ajaxy Style Switchin’ Fun! Many Blessings, Merry Christmas and have a great new year! 2005 Dustin Diaz dustindiaz 2005-12-18T00:00:00+00:00 https://24ways.org/2005/introducing-udasss/ code
126 Intricate Fluid Layouts in Three Easy Steps The Year of the Script may have drawn attention away from CSS but building fluid, multi-column, cross-browser CSS layouts can still be as unpleasant as a lump of coal. Read on for a worry-free approach in three quick steps. The layout system I developed, YUI Grids CSS, has three components. They can be used together as we’ll see, or independently. The Three Easy Steps Choose fluid or fixed layout, and choose the width (in percents or pixels) of the page. Choose the size, orientation, and source-order of the main and secondary blocks of content. Choose the number of columns and how they distribute (for example 50%-50% or 25%-75%), using stackable and nestable grid structures. The Setup There are two prerequisites: we need to normalize the size of an em and opt into the browser rendering engine’s Strict Mode. Ems are a superior unit of measure for our case because they represent the current font size and grow as the user increases their font size setting. This flexibility—the container growing with the user’s wishes—means larger text doesn’t get crammed into an unresponsive container. We’ll use YUI Fonts CSS to set the base size because it provides consistent-yet-adaptive font-sizes while preserving user control. The second prerequisite is to opt into Strict Mode (more info on rendering modes) by declaring a Doctype complete with URI. You can choose XHTML or HTML, and Transitional or Strict. I prefer HTML 4.01 Strict, which looks like this: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> Including the CSS A single small CSS file powers a nearly-infinite number of layouts thanks to a recursive system and the interplay between the three distinct components. You could prune to a particular layout’s specific needs, but why bother when the complete file weighs scarcely 1.8kb uncompressed? Compressed, YUI Fonts and YUI Grids combine for a minuscule 0.9kb over the wire. You could save an HTTP request by concatenating the two CSS files, or by adding their contents to your own CSS, but I’ll keep them separate for now: <link href="fonts.css" rel="stylesheet" type="text/css"> <link href="grids.css" rel="stylesheet" type="text/css"> Example: The Setup Now we’re ready to build some layouts. Step 1: Choose Fluid or Fixed Layout Choose between preset widths of 750px, 950px, and 100% by giving a document-wrapping div an ID of doc, doc2, or doc3. These options cover most use cases, but it’s easy to define a custom fixed width. The fluid 100% grid (doc3) is what I’ve been using almost exclusively since it was introduced in the last YUI release. <body> <div id="doc3"></div> </body> All pages are centered within the viewport, and grow with font size. The 100% width page (doc3) preserves 10px of breathing room via left and right margins. If you prefer your content flush to the viewport, just add #doc3 {margin:auto} to your CSS. Regardless of what you choose in the other two steps, you can always toggle between these widths and behaviors by simply swapping the ID value. It’s really that simple. Example: 100% fluid layout Step 2: Choose a Template Preset This is perhaps the most frequently omitted step (they’re all optional), but I use it nearly every time. In a source-order-independent way (good for accessibility and SEO), “Template Presets” provide commonly used template widths compatible with ad-unit dimension standards defined by the Interactive Advertising Bureau, an industry association. 
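One aside before we pick a preset: because the whole of Step 1 hangs off that single ID, nothing stops you flipping layouts at runtime. A hypothetical sketch in plain JavaScript (my own illustration, not part of YUI Grids itself):

// Hypothetical: swap the document wrapper from the 750px fixed grid
// to the 100% fluid grid; everything inside reflows automatically.
var wrapper = document.getElementById('doc');
if (wrapper) {
  wrapper.id = 'doc3';
}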
Choose between the six Template Presets (.yui-t1 through .yui-t6) by setting the class value on the document-wrapping div established in Step 1. Most frequently I use yui-t3, which puts the narrow secondary block on the left and makes it 300px wide. <body> <div id="doc3" class="yui-t3"></div> </body> The Template Presets control two “blocks” of content, which are defined by two divs, each with yui-b (“b” for “block”) class values. Template Presets describe the width and orientation of the secondary block; the main block will take up the rest of the space. <body> <div id="doc3" class="yui-t3"> <div class="yui-b"></div> <div class="yui-b"></div> </div> </body> Use a wrapping div with an ID of yui-main to structurally indicate which block is the main block. This wrapper—not the source order—identifies the main block. <body> <div id="doc3" class="yui-t3"> <div id="yui-main"> <div class="yui-b"></div> </div> <div class="yui-b"></div> </div> </body> Example: Main and secondary blocks sized and oriented with .yui-t3 Template Preset Again, regardless of what values you choose in the other steps, you can always toggle between these Template Presets by toggling the class value of your document-wrapping div. It’s really that simple. Step 3: Nest and Stack Grid Structures. The bulk of the power of the system is in this third step. The key is that columns are built by parents telling children how to behave. By default, two children each consume half of their parent’s area. Put two units inside a grid structure, and they will sit side-by-side, and they will each take up half the space. Nest this structure and two columns become four. Stack them for rows of columns. An Even Number of Columns The default behavior creates two evenly-distributed columns. It’s easy. Define one parent grid with .yui-g (“g” for grid) and two child units with .yui-u (“u” for unit). The code looks like this: <div class="yui-g"> <div class="yui-u first"></div> <div class="yui-u"></div> </div> Be sure to indicate the “first” unit because the :first-child pseudo-class selector isn’t supported across all A-grade browsers. It’s unfortunate we need to add this, but luckily it’s not out of place in the markup layer since it is structural information. Example: Two evenly-distributed columns in the main content block An Odd Number of Columns The default system does not work for an odd number of columns without using the included “Special Grids” classes. To create three evenly distributed columns, use the “yui-gb” Special Grid: <div class="yui-gb"> <div class="yui-u first"></div> <div class="yui-u"></div> <div class="yui-u"></div> </div> Example: Three evenly distributed columns in the main content block Uneven Column Distribution Special Grids are also used for unevenly distributed column widths. For example, .yui-ge tells the first unit (column) to take up 75% of the parent’s space and the other unit to take just 25%. <div class="yui-ge"> <div class="yui-u first"></div> <div class="yui-u"></div> </div> Example: Two columns in the main content block split 75%-25% Putting It All Together Start with a full-width fluid page (div#doc3). Make the secondary block 180px wide on the right (div.yui-t4). Create three rows of columns: three evenly distributed columns in the first row (div.yui-gb), two uneven columns (66%-33%) in the second row (div.yui-gc), and two evenly distributed columns in the third row. 
<body> <!-- choose fluid page and Template Preset --> <div id="doc3" class="yui-t4"> <!-- main content block --> <div id="yui-main"> <div class="yui-b"> <!-- stacked grid structure, Special Grid "b" --> <div class="yui-gb"> <div class="yui-u first"></div> <div class="yui-u"></div> <div class="yui-u"></div> </div> <!-- stacked grid structure, Special Grid "c" --> <div class="yui-gc"> <div class="yui-u first"></div> <div class="yui-u"></div> </div> <!-- stacked grid structure --> <div class="yui-g"> <div class="yui-u first"></div> <div class="yui-u"></div> </div> </div> </div> <!-- secondary content block --> <div class="yui-b"></div> </div> </body> Example: A complex layout. Wasn’t that easy? Now that you know the three “levers” of YUI Grids CSS, you’ll be creating headache-free fluid layouts faster than you can say “Peace on Earth”. 2006 Nate Koechley natekoechley 2006-12-20T00:00:00+00:00 https://24ways.org/2006/intricate-fluid-layouts/ code
295 Internet of Stranger Things This year I’ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I’m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension [1]. What you’ll see You can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact, why not try it now: check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it; hit the “Send a message” button and wait your turn.) It’s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I’m using WebSockets to communicate in real time between the browser, server and Raspberry Pi. What you’ll need Raspberry Pi, any of the following models: Zero (will need straight male header pins soldered [2] and a Micro USB OTG adaptor), A+, B+, 2, or 3 Micro SD card, at least 4GB, Class 10 speed [3] Micro USB power supply, at least 2A USB wifi dongle (unless you have a Pi 3 - that has wifi built in) Addressable fairy lights Logic level shifter (with pins soldered unless you want to do it!) Breadboard Jumper wires (3x male to male and 4x female to male) Optional but recommended: base board to hold the Pi and breadboard (often comes with a breadboard!) You’ll find links for where to buy all of these items in the support document that goes along with this tutorial. The total price should be around $100 [4]. Setting up the Raspberry Pi You’ll need to install the SD card for the Raspberry Pi. You’ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher [5]. Next up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There’s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details [6]. network={ ssid="mywifi" psk="mypassword" } Save the file, eject the card from your computer and plug it into the Raspberry Pi. If you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle [7] and power supply, but don’t plug it in yet! Wiring! Time to wire everything up! First of all, push the logic level converter into the middle of the breadboard: Logic Level Converter The logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top. Raspberry Pi pins only output 3.3V but the lights need 5V. That’s why we need the logic level converter in there to boost up the signal. Connect the first two wires between the Raspberry Pi pins and the breadboard: Note that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don’t have to match but it’s easier to follow (and check) if you use the same ones as in the diagram. 
Then the next two: This is what you should have so far: Lights Now to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard. Make sure that you connect the right end of the lights; mine has a male connector at the wrong end so it’s impossible to get wrong, but double check. Also make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or ‘-’ or G) and DI (sometimes DIN for data in). You can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that’s what takes the data along to the chip in the next light. Make sure that you’re plugging into the data-in and not the data-out! That’s it! Everything’s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they’re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check! The Moment of Truth! Plug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that’s your confirmation that it’s connected to my WebSocket server and ready to receive messages from the upside-down! However, if the first light in the string is pulsing red, it means that you’re not connected to the internet. So check the Troubleshooting section of the support document. If it’s pulsing green then you’re connected to the internet but can’t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it’ll come back up. Rig up the lights! Fix the lights up on the wall however you want: pins, nails, tape. I’ve used cable clips. Just be careful! I’m using a 50-light string so I’ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi. Check the photo here to see how the lights line up; note that there are spare unused lights in-between each row. Now visit lights.seb.ly and you’ll see this: If you’re the only one online you’ll have a direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you’ll need to click the button to join the queue and wait your turn. How it works - the geeky details! Electronics: The pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them: LED lights, buttons, switches, and sensors. You can turn the power to the pins on and off using Node.js (or Python, if you prefer). Addressable LEDs or “Neopixels” We’re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string. 
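To give a flavour of what driving those chips looks like from Node.js, here is an illustrative sketch using the rpi-ws281x-native package from npm. That package choice is an assumption on my part for illustration, not a peek inside the actual stranger-lights client code mentioned below:

// Illustrative sketch only - assumes the rpi-ws281x-native npm package.
var ws281x = require('rpi-ws281x-native');

var NUM_LEDS = 50;        // a 50-light string, as used in this project
ws281x.init(NUM_LEDS);

var pixels = new Uint32Array(NUM_LEDS);
pixels[0] = 0xff0000;     // first light red (0xRRGGBB-style values)
pixels[1] = 0x00ff00;     // second light green

ws281x.render(pixels);    // the library produces the strictly timed signal

The library takes care of the precise timing described above; your code just fills an array with colour values and hands it over.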
The chips on each light are the WS2811, part of the WS281x family that comes in a multitude of different form factors and is often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels [8] and I used them on my Laser Light Synths project. Neopixels with the chip and the LED all in one - it’s the white square-shaped component and the darker square inside is the chip. These are only 5mm wide! A Laser Light Synth! Covered with around 800 super bright neopixels! Logic Level Converter The logic level converter is a really cheap and easy way to change the level from 3.3V to 5V and back again. You must be careful that you do not connect 5V into a GPIO pin or you will most likely damage the Raspberry Pi processor chip. Power Neopixels can often draw a lot of current so you need to be careful how you power them. I’ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you’ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit’s Neopixel Uberguide. Node.js There are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they’re hosted on npm as stranger-lights and stranger-lights-server. The server-side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time. WebSockets I’m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connect to my WebSocket server. When you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers [9]. The Raspberry Pi code The Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file: node /home/pi/strangerthings/client.js > /dev/null & Anything in the rc.local file gets executed when the Pi boots up, and this line of code runs the Node.js app and routes its output to nowhere (i.e. /dev/null). The & means that it runs in the background and doesn’t hold up the boot process. Working with the Raspberry Pi headless You might know that when a computer has no screen or keyboard, you would refer to it as “running headless”. So just like most web servers, you need to configure it over the network with ssh [10]. If you’re on a Mac you can find your Pi on the network through the name raspberrypi.local [11], otherwise you’ll need to find its IP address. There’s more on the guide to Remote Access instructions on the Raspberry Pi website. And if you’re very new to the terminal, I highly recommend this great online Linux command line tutorial. Improvements This is quite an early experiment and I’m sure I’ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I’ve just rigged up my lights with Post-it notes. It’d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style. Where next? 
Finding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses; please let me know if that’s something you’d be interested in, or sign up to my mailing list at st4i.com. There are many, many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you’ll be spoilt for choice! Also take a look at Arduino - it’s an incredibly popular platform for electronics and the internet is literally bursting with resources. I hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly. [1] Not a particularly original idea, but I don’t think I’ve seen anyone do it quite like this before, i.e. using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables [2] Video guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit [3] Slower cards will work but performance may suffer [4] Or £5,000 in UK money. Sorry, Brexit joke :) [5] You will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. [6] SSID and password should be all that you need but you can see all the config options on this wpa supplicant guide [7] Raspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle [8] Thanks to Adafruit who invented the term neopixels so we don’t have to refer to them as WS281x any more! [9] So you can see other people sending messages in the browser [10] ssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal. [11] You can change this default hostname using raspi-config 2016 Seb Lee-Delisle sebleedelisle 2016-12-01T00:00:00+00:00 https://24ways.org/2016/internet-of-stranger-things/ code
26 Integrating Contrast Checks in Your Web Workflow It’s nearly Christmas, which means you’ll be sure to find an overload of festive red and green decorating everything in sight—often in the ugliest ways possible. While I’m not here to battle holiday tackiness in today’s 24 ways, it might just be the perfect reminder to step back and consider how we can implement colour schemes in our websites and apps that are not only attractive, but also legible and accessible for folks with various types of visual disabilities. This simulated photo demonstrates how red and green Christmas baubles could appear to a person affected by protanopia-type colour blindness—not as festive as you might think. Source: Derek Bruff I’ve been fortunate to work with Simply Accessible to redesign not just their website, but their entire brand. Although the new site won’t be launching until the new year, we’re excited to let you peek under the tree and share a few treats as a case study into how we tackled colour accessibility in our project workflow. Don’t worry—we won’t tell Santa! Create a colour game plan A common misconception about accessibility is that meeting compliance requirements hinders creativity and beautiful design—but we beg to differ. Unfortunately, like many company websites and internal projects, Simply Accessible has spent so much time helping others that they had not spent enough time helping themselves to show the world who they really are. This was the perfect opportunity for them to practise what they preached. After plenty of research and brainstorming, we decided to evolve the existing Simply Accessible brand. Or, rather, salvage what we could. There was no established logo to carry into the new design (it was a stretch to even call it a wordmark), and the Helvetica typography across the site lacked any character. The only recognizable feature left to work with was colour. It was a challenge, for sure: the oranges looked murky and brown, and the blues looked way too corporate for a company like Simply Accessible. We knew we needed to inject a lot of personality. The old Simply Accessible website and colour palette. After an audit to round up every colour used throughout the site, we dug in deep and played around with some ideas to bring some new life to this palette. Choose effective colours Whether you’re starting from scratch or evolving an existing brand, the first step to having an effective and legible palette begins with your colour choices. While we aren’t going to cover colour message and meaning in this article, it’s important to understand how to choose colours that can be used to create strong contrast—one of the most important ways to create hierarchy, focus, and legibility in your design. There are a few methods of creating effective contrast. Light and dark colours The contrast that exists between light and dark colours is the most important attribute when creating effective contrast. Try not to use colours that have a similar lightness next to each other in a design. The red and green colours on the left share a similar lightness and don’t provide enough contrast on their own without making some adjustments. Removing colour and showing the relationship in greyscale reveals that the version on the right is much more effective. It’s important to remember that red and green colour pairs cause difficulty for the majority of colour-blind people, so they should be avoided wherever possible, especially when placed next to each other. 
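That greyscale gut check is easy to approximate in code, too. Here is a quick sketch of my own (not part of any tool mentioned in this article) that reduces a colour to a rough perceived lightness so two swatches can be compared numerically:

// Approximate perceived lightness (0-255) using the Rec. 709 luma weights.
function luma(r, g, b) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// A problem pair: similar lightness despite very different hues.
console.log(luma(204, 51, 51));  // red, roughly 84
console.log(luma(51, 119, 51));  // green, roughly 100

If two swatches land this close together, they will struggle as a foreground and background pair no matter how different their hues are.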
Complementary contrast Effective contrast can also be achieved by choosing complementary colours (other than red and green), that are opposite each other on a colour wheel. These colour pairs generally work better than choosing adjacent hues on the wheel. Cool and warm contrast Contrast also exists between cool and warm colours on the colour wheel. Imagine a colour wheel divided into cool colours like blues, purples, and greens, and compare them to warm colours like reds, oranges and yellows. Choosing a dark shade of a cool colour, paired with a light tint of a warm colour will provide better contrast than two warm colours or two cool colours. Develop colour concepts After much experimentation, we settled on a simple, two-colour palette of blue and orange, a cool-warm contrast colour scheme. We added swatches for call-to-action messaging in green, error messaging in red, and body copy and form fields in black and grey. Shades and tints of blue and orange were added to illustrations and other design elements for extra detail and interest. First stab at a new palette. We introduced the new palette for the first time on an internal project to test the waters before going full steam ahead with the website. It gave us plenty of time to get a feel for the new design before sharing it with the public. Putting the test palette into practice with an internal report It’s important to be open to changes in your palette as it might need to evolve throughout the design process. Don’t tell your client up front that this palette is set in stone. If you need to tweak the colour of a button later because of legibility issues, the last thing you want is your client pushing back because it’s different from what you promised. As it happened, we did tweak the colours after the test run, and we even adjusted the logo—what looked great printed on paper looked a little too light on screens. Consider how colours might be used Don’t worry if you haven’t had the opportunity to test your palette in advance. As long as you have some well-considered options, you’ll be ready to think about how the colour might be used on the site or app. Obviously, in such early stages it’s unlikely that you’re going to know every element or feature that will appear on the site at launch time, or even which design elements could be introduced to the site later down the road. There are, of course, plenty of safe places to start. For Simply Accessible, I quickly mocked up these examples in Illustrator to get a handle on the elements of a website where contrast and legibility matter the most: text colours and background colours. While it’s less important to consider the contrast of decorative elements that don’t convey essential information, it’s important for a reader to be able to discern elements like button shapes and empty form fields. A basic list of possible colour combinations that I had in mind for the Simply Accessible website Run initial tests Once these elements were laid out, I manually plugged in the HTML colour code of each foreground colour and background colour on Lea Verou’s Contrast Checker. I added the results from each colour pair test to my document so we could see at a glance which colours needed adjustment or which colours wouldn’t work at all. Note: Read more about colour accessibility and contrast requirements As you can see, a few problems were revealed in this test. To meet the minimum AA compliance, we needed to slightly darken the green, blue, and orange background colours for text—an easy fix. 
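If you’d rather script these checks than paste values into a form one pair at a time, the ratio itself is straightforward to compute. Here is a compact sketch of the WCAG 2.0 formula in plain JavaScript; my own illustration, not the code behind Lea’s checker:

// Contrast ratio between two RGB colours, per the WCAG 2.0 definition.
function channel(c) {
  c /= 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function luminance(r, g, b) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg, bg) {
  var l1 = luminance(fg[0], fg[1], fg[2]);
  var l2 = luminance(bg[0], bg[1], bg[2]);
  var lighter = Math.max(l1, l2);
  var darker = Math.min(l1, l2);
  return (lighter + 0.05) / (darker + 0.05);
}

// White text on a dark blue: roughly 6.2:1, comfortably past the 4.5:1 AA bar.
console.log(contrastRatio([255, 255, 255], [0, 102, 153]).toFixed(2));

Wrapping your whole palette in a function like this makes it trivial to sweep every foreground and background combination at once.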
A more complicated problem was apparent with the button colours. I had envisioned some buttons appearing over a blue background, but the contrast ratios were well under 3:1. Although there isn’t a guide in WCAG for contrast requirements of two non-text elements, the ISO and ANSI standard for visible contrast is 3:1, which is what we decided to aim for. We also checked our colour combinations in Color Oracle, an app that simulates the most extreme forms of colour blindness. It confirmed that coloured buttons over blue backgrounds was simply not going to work. The contrast was much too low, especially for the more common deuteranopia and protanopia-type deficiencies. How our proposed colour pairs could look to people with three types of colour blindness Make adjustments if necessary As a solution, we opted to change all buttons to white when used over dark coloured backgrounds. In addition to increasing contrast, it also gave more consistency to the button design across the site instead of introducing a lot of unnecessary colour variants. Putting more work into getting compliant contrast ratios at this stage will make the rest of implementation and testing a breeze. When you’ve got those ratios looking good, it’s time to move on to implementation. Implement colours in style guide and prototype Once I was happy with my contrast checks, I created a basic style guide and added all the colour values from my colour exploration files, introduced more tints and shades, and added patterned backgrounds. I created examples of every panel style we were planning to use on the site, with sample text, links, and buttons—all with working hover states. Not only does this make it easier for the developer, it allows you to check in the browser for any further contrast issues. Run a final contrast check During the final stages of testing and before launch, it’s a good idea to do one more check for colour accessibility to ensure nothing’s been lost in translation from design to code. Unless you’ve introduced massive changes to the design in the prototype, it should be fairly easy to fix any issues that arise, particularly if you’ve stayed on top of updating any revisions in the style guide. One of the more well-known evaluation tools, WAVE, is web-based and will work in any browser, but I love using Chrome’s Accessibility Tools. Not only are they built right in to the Inspector, but they’ll work if your site is password-protected or private, too. Chrome’s Accessibility Tools audit feature shows that there are no immediate issues with colour contrast in our prototype The human touch Finally, nothing beats a good round of user testing. Even evaluation tools have their flaws. Although they’re great at catching contrast errors for text and backgrounds, they aren’t going to be able to find errors in non-text elements, infographics, or objects placed next to each other where discernible contrast is important. Our final palette, compared with our initial ideas, was quite different, but we’re proud to say it’s not just compliant, but shows Simply Accessible’s true personality. Who knows, it may not be final at all—there are so many opportunities down the road to explore and expand it further. Accessibility should never be an afterthought in a project. It’s not as simple as adding alt text to images, or running your site through a compliance checker at the last minute and assuming that a pass means everything is okay. 
Considering how colour will be used during every stage of your project will help avoid massive problems before launch, or worse, launching with serious issues. If you find yourself working on a personal project over the Christmas break, try integrating these checks into your workflow and make colour accessibility a part of your New Year’s resolutions. 2014 Geri Coady gericoady 2014-12-22T00:00:00+00:00 https://24ways.org/2014/integrating-contrast-checks-in-your-web-workflow/ design
291 Information Literacy Is a Design Problem Information literacy, wrote Dr. Carol Kuhlthau in her 1987 paper “Information Skills for an Information Society,” is “the ability to read and to use information essential for everyday life”—that is, to effectively navigate a world built on “complex masses of information generated by computers and mass media.” Nearly thirty years later, those “complex masses of information” have only grown wilder, thornier, and more constant. We call the internet a firehose, yet we’re loath to turn it off (or even down). The amount of information we consume daily is staggering—and yet our ability to fully understand it all remains frustratingly insufficient. This should hit a very particular chord for those of us working on the web. We may be developers, designers, or strategists—we may not always be responsible for the words themselves—but we all know that communication is much more than just words. From fonts to form fields, every design decision that we make changes the way information is perceived—for better or for worse. What’s more, the design decisions that we make feed into larger patterns. They don’t just affect the perception of a single piece of information on a single site; they start to shape reader expectations of information anywhere. Users develop cumulative mental models of how websites should be: where to find a search bar, where to look at contact information, how to filter a product list. And yet: our models fail us. Fundamentally, we’re not good at parsing information, and that’s troubling. Our experience of an “information society” may have evolved, but the skills Dr. Kuhlthau spoke of are even more critical now: our lives depend on information literacy. Patterns from words Let’s start at the beginning: with the words. Our choice of words can drastically alter a message, from its emotional resonance to its context to its literal meaning. Sometimes we can use word choice for good, to reinvigorate old, forgotten, or unfairly besmirched ideas. One time at a wedding bbq we labeled the coleslaw BRASSICA MIXTA so people wouldn’t skip it based on false hatred.— Eileen Webb (@webmeadow) November 27, 2016 We can also use clever word choice to build euphemisms, to name sensitive or intimate concepts without conjuring their full details. This trick gifts us with language like “the beast with two backs” (thanks, Shakespeare!) and “surfing the crimson wave” (thanks, Cher Horowitz!). But when we grapple with more serious concepts—war, death, human rights—this habit of declawing our language gets dangerous. Using more discreet wording serves to nullify the concepts themselves, euphemizing them out of sight and out of mind. The result? Politicians never lie, they just “misspeak.” Nobody’s racist, but plenty of people are “economically anxious.” Nazis have rebranded as “alt-right.” I’m not an asshole, I’m just alt-nice.— Andi Zeisler (@andizeisler) November 22, 2016 The problem with euphemisms like these is that they quickly infect everyday language. We use the words we hear around us. The more often we see “alt-right” instead of “Nazi,” the more likely we are to use that phrase ourselves—normalizing the term as well as the terrible ideas behind it. Patterns from sentences That process of normalization gets a boost from the media, our main vector of information about the world outside ourselves. Headlines control how we interpret the news that follows—even if the story contradicts it in the end. 
We hear the framing more clearly than the content itself, coloring our interpretation of the news over time. Even worse, headlines are often written to encourage clicks, not to convey critical information. When headline-writing is driven by sensationalism, it’s much, much easier to build a pattern of misinformation. Take this CBS News headline: “Donald Trump: ‘Millions’ voted illegally for Hillary Clinton.” The headline makes no indication that this is an objectively false statement; instead, this word choice subtly suggests that millions did, in fact, illegally vote for Hillary Clinton. Headlines like this are what make lying a worthwhile political strategy. https://t.co/DRjGeYVKmW— Binyamin Appelbaum (@BCAppelbaum) November 27, 2016 This is a deeply dangerous choice of words when headlines are the primary way that news is conveyed—especially on social media, where it’s much faster to share than to actually read the article. In fact, according to a study from the Media Insight Project, “roughly six in 10 people acknowledge that they have done nothing more than read news headlines in the past week.” If a powerful person asserts X there are 2 responsible ways to cover: 1. “X is true” 2. “Person incorrectly thinks X” Never “Person says X”— Helen Rosner (@hels) November 27, 2016 Even if we do, in fact, read the whole article, there’s no guarantee that we’re thinking critically about it. A study conducted by Stanford found that “82 percent of students could not distinguish between a sponsored post and an actual news article on the same website. Nearly 70 percent of middle schoolers thought they had no reason to distrust a sponsored finance article written by the CEO of a bank, and many students evaluate the trustworthiness of tweets based on their level of detail and the size of attached photos.” Friends: our information literacy is not very good. Luckily, we—workers of the web—are in a position to improve it. Sentences into design Consider the presentation of those all-important headlines in social media cards, as on Facebook. The display is a combination of both the card’s design and the article’s source code, and looks something like this: A large image, a large headline; perhaps a brief description; and, at the bottom, in pale gray, a source and an author’s name. Those choices convey certain values: specifically, they suggest that the headline and the picture are the entire point. The source is so deemphasized that it’s easy to see how fake news gains a foothold: daily exposure to this kind of hierarchy has taught us that sources aren’t important. And that’s the message from the best-case scenario. Not every article shows every piece of data. Take this headline from the BBC: “Wisconsin receives request for vote recount.” With no image, no description, and no author, there’s little opportunity to signal trust or provide nuance. There’s also no date—ever—which presents potentially misleading complications, especially in the context of “breaking news.” And lest you think dates don’t matter in the light-speed era of social media, take the headline, “Maryland sidesteps electoral college.” Shared into my feed two days after the US presidential election, that’s some serious news with major historical implications. But since there’s no date on this card, there’s no way for readers to know that the “Tuesday” it refers to was in 2007. Again, a design choice has made misinformation far too contagious. 
More recently, I posted my personal reaction to the death of Fidel Castro via a series of twenty tweets. Wanting to share my thoughts with friends and family who don’t use Twitter, I then posted the first tweet to Facebook. The card it generated was less than ideal: The information hierarchy created by this approach prioritizes the name of the Twitter user (not even the handle), along with the avatar. Not only does that create an awkward “headline” (at least when you include a full stop in your name), but it also minimizes the content of the tweet itself—which was the whole point. The arbitrary elevation of some pieces of content over others—like huge headlines juxtaposed with minimized sources—teaches readers that these values are inherent to the content itself: that the headline is the news, that the source is irrelevant. We train readers to stop looking for the information we don’t put in front of them. These aren’t life-or-death scenarios; they are just cases where design decisions noticeably dictate the perception of information. Not every design decision makes so obvious an impact, but the impact is there. Every single action adds to the pattern. Design with intention We can’t necessarily teach people to read critically or vet their sources or stop believing conspiracy theories (or start believing facts). Our reach is limited to our roles: we make websites and products for companies and colleges and startups. But we have more reach there than we might realize. Every decision we make influences how information is presented in the world. Every presentation adds to the pattern. No matter how innocuous our organization, how lowly our title, how small our user base—every single one of us contributes, a little bit, to the way information is perceived. Are we changing it for the better? While it’s always been crucial to act ethically in the building of the web, our cultural climate now requires dedicated, individual conscientiousness. It’s not enough to think ourselves neutral, to dismiss our work as meaningless or apolitical. Everything is political. Every action, and every inaction, has an impact. As Chappell Ellison put it much more eloquently than I can: Every single action and decision a designer commits is a political act. The question is, are you a conscious actor?— Chappell Ellison🤔 (@ChappellTracker) November 28, 2016 As shapers of information, we have a responsibility: to create clarity, to further understanding, to advance truth. Every single one of us must choose to treat information—and the society it builds—with integrity. 2016 Lisa Maria Martin lisamariamartin 2016-12-14T00:00:00+00:00 https://24ways.org/2016/information-literacy-is-a-design-problem/ content
91 Infinite Canvas: Moving Beyond the Page Remember Web 2.0? I do. In fact, that phrase neatly bifurcates my life on the internet. Pre-2.0, I was occupied by chatting on AOL and eventually by learning HTML so I could build sites on Geocities. Around 2002, however, I saw a WYSIWYG demo in Dreamweaver. The instructor was dragging boxes and images around a canvas. With a few clicks he was able to build a dynamic, single-page interface. Coming from the world of tables and inline HTML styles, I was stunned. As I entered college the next year, the web was blossoming: broadband, Wi-Fi, mobile (proud PDA owner, right here), CSS, Ajax, Bloglines, Gmail and, soon, Google Maps. I was a technology fanatic and a hobbyist web developer. For me, the web had long been informational. It was now rapidly becoming something else, something more: sophisticated, presentational, actionable. In 2003 we watched as the internet changed. The predominant theme of those early Web 2.0 years was the withering of Internet Explorer 6 and the triumph of web standards. Upon cresting that mountain, we looked around and collectively breathed the rarefied air of pristine HTML and CSS, uncontaminated by toxic hacks and forks – only to immediately begin hurtling down the other side at what is, frankly, terrifying speed. Ten years later, we are still riding that rocket. Our days (and nights) are spent cramming for exams on CSS3 and RWD and Sass and RESS. We are the proud, frazzled owners of tiny pocket computers that annihilate the best laptops we could have imagined, and the architects of websites that are no longer restricted to big screens nor even segregated by device. We dragoon our sites into working any time, anywhere. At this point, we can hardly ask the spec developers to slow down to allow us to catch our breath, nor should we. It is, without a doubt, a most wonderful time to be a web developer. But despite the newfound luxury of rounded corners, gradients, embeddable fonts, low-level graphics APIs, and, glory be, shadows, the canyon between HTML and native appears to be as wide as ever. The improvements in HTML and CSS have, for the most part, been conveniences rather than fundamental shifts. What I’d like to do now, if you’ll allow me, is outline just a few of the remaining gaps that continue to separate web sites and applications from their native companions. What I’d like for Christmas There is one irritant which is the grandfather of them all, the one from which all others flow and have their being, and it is, simply, the page refresh. That’s right, the foundational principle of the web is our single greatest foe. To paraphrase a patron saint of designers everywhere, if you see a page refresh, we blew it. The page refresh brings with it, of course, many noble and lovely benefits: addressability, for one; and pagination, for another. (See also caching, resource loading, and probably half a dozen others.) Still, those concerns can be answered (and arguably answered more compellingly) by replacing the weary page with the young and hearty document. Flash may be dead, but it has many lessons yet to bequeath. Preparing a single document when the site loads allows us to engage the visitor in a smooth and engrossing experience. We have long known this, of course. Twitter was not the first to attempt, via JavaScript, to envelop the user in a single-page application, nor the first to abandon it. 
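To appreciate what “via JavaScript” costs, here is a bare-bones sketch of that scripted pattern, illustrative only, with a made-up fetchArticle helper standing in for whatever Ajax machinery would actually load the markup:

// Illustrative sketch of the scripted single-page pattern.
// fetchArticle is hypothetical: assume it fetches the new page's markup.
var links = document.querySelectorAll('a.internal');
for (var i = 0; i < links.length; i++) {
  links[i].addEventListener('click', function (e) {
    e.preventDefault();
    var href = this.href;
    fetchArticle(href, function (html) {
      document.getElementById('pages').innerHTML = html;
      history.pushState(null, '', href); // keep the URL addressable
    });
  });
}
// A real version would also listen for popstate, or the back button breaks.

Every one of those lines is a workaround for something the platform could, in principle, give us for free.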
Our shared task is to move those technologies down the stack, to make them more primitive, so that the next Twitter can be built with the most basic combination of HTML and CSS rather than relying on complicated, slow, and unreliable scripted solutions. So, let’s take a look at what we can do, right now, that we might have a better idea of where our current tools fall short. A print magazine in HTML clothing Like many others, I suspect, one of my earliest experiences with publishing was laying out newsletters and newspapers on a computer for print. If you’ve ever used InDesign or Quark or even Microsoft Publisher, you’ll remember reflowing content from page to page. The advent of the internet signaled, in many ways, the abandonment of that model. Articles were no longer constrained by the physical limitations of paper. In shedding our chains, however, it is arguable that we’ve lost something useful. We had a self-contained and complete package, a closed loop. It was a thing that could be handled and finished, and doing so provided a sense of accomplishment that our modern, infinitely scrolling, ever-fractal web of content has stolen. For our purposes today, we will treat 24 ways as the online equivalent of that newspaper or magazine. A single year’s worth of articles could easily be considered an issue. Right now, navigating between articles means clicking on the article you’d like to view and being taken to that specific address via a page reload. If Drew wanted to, it wouldn’t be difficult to update the page in place (via JavaScript) and change the address (again via JavaScript with the History API) to reflect the new content found at the new location. But what if Drew wanted to do that without JavaScript? And what if he wanted the site to not merely load the content but actually whisk you along the page in a compelling and delightful way, à la the Mag+ demo we all saw a few years ago when the iPad was first introduced? Uh, no. We’re all familiar with websites that have attempted to go beyond the page by weaving many chunks of content together into a large document and for good reason. There is tremendous appeal in opening and exploring the canvas beyond the edges of our screens. In one rather straightforward example from last year, Mozilla contacted Full Stop to build a website promoting Aza Raskin’s proposal for a set of Creative Commons-style privacy icons. Like a lot of the sites we build (including our own), the amount of information we were presenting was minimal. In these instances, we encourage our clients to consider including everything on a single page. The result was a horizontally driven site that was, if not whimsical, at least clever and attractive to the intended audience. An experience that is taken for granted when using device-native technology is utterly, maddeningly impossible to replicate on the web without jumping through JavaScript hoops. In another, more complex example, we again had the pleasure of working with Aza earlier this year, this time on a redesign of the Massive Health website. Our assignment was to design and build a site that communicated Massive’s commitment to modern personal health. The site had to be visually and interactively stunning while maintaining a usable and clear interface for the casual visitor. Our solution was to extend the infinite company logo into a ribbon that carried the visitor through the site narrative. It also meant we’d be asking the browser to accommodate something it was never designed to handle: a non-linear design. 
(Be sure to play around. There’s a lot going on under the hood. We were also this close to a ZUI, if WebKit didn’t freak out when pages were scaled beyond 10×.) Despite the apparent and deliberate design simplicity, the techniques necessary to implement it are anything but. From updating the URL to moving the visitor from section to section, we’re firmly in JavaScript territory. And that’s a shame. What can we do? We might not be able to specify these layouts in HTML and CSS just yet, but that doesn’t mean we can’t learn a few new tricks while we wait. Let’s see how close we can come to recreating the privacy icons design, the Massive design, or the Mag+ design without resorting to JavaScript. A horizontally paginated site The first thing we’re going to need is the concept of a page within our HTML document. Using plain old HTML and CSS, we can stack a series of <div>s sideways (with a little assist from our new friend, the viewport-width unit, not that he was strictly necessary). All we need to know is how many pages we have. (And, boy, wouldn’t it be nice to be able to know that without having to predetermine it or use JavaScript?) .window { overflow: hidden; width: 100%; } .pages { width: 200vw; } .page { float: left; overflow: hidden; width: 100vw; } If you look carefully, you’ll see that the conceit we’ll use in the rest of the demos is in place. Despite the document containing multiple pages, only one is visible at any given time. This allows us to keep the user focused on the task (or content) at hand. By the way, you’ll need to use a modern, WebKit-based browser for these demos. I recommend downloading the WebKit nightly builds, Chrome Canary, or being comfortable with setting flags in Chrome. A horizontally paginated site, with transitions Ah, here’s the rub. We have functional navigation, but precious few cues for the user. It’s not much good shoving the visitor around various parts of the document if they don’t get the pleasant whooshing experience of the journey. You might be thinking, what about that new CSS selector, target-something…? Well, my friend, you’re on the right track. Let’s test it. We’re going to need to use a bit of sleight of hand. While we’d like to simply offset the containing element by the number of pages we’re moving (like we did on Massive), CSS alone can’t give us that information, and that means we’re going to need to fake it by expanding and collapsing pages as you navigate. Here are the bits we’re going to need: .page { -webkit-transition: width 1s; /* Naturally you're going to want to include all the relevant prefixes here */ float: left; left: 0; overflow: hidden; position: relative; width: 100vw; } .page:not(:target) { width: 0; } Ah, but we’re not fooling anyone with that trick. As soon as you move beyond a single page, the visitor’s disbelief comes tumbling down when the linear page transitions are unaffected by the distance the pages are allegedly traveling. And you may have already noticed an even more fatal flaw: I secretly linked you to the first page rather than the unadorned URL. If you visit the same page with no URL fragment, you get a blank screen. Sure, we could force a redirect with some server-side trickery, but that feels like cheating. Perhaps if we had the CSS4 subject selector we could apply styles to the parent based on the child being targeted by the URL. We might also need a few more abilities, like determining the total number of pages and having relative sibling selectors (e.g. nth-sibling), but we’d sure be a lot closer.
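For reference, the skeleton underneath these demos is nothing fancier than a stack of pages inside the window, with ids for the fragment links to target. Here is a rough sketch of my own (the demos' actual markup may differ in the details):

<nav>
  <a href="#page-1">Page 1</a>
  <a href="#page-2">Page 2</a>
</nav>
<div class="window">
  <div class="pages">
    <!-- each page gets an id so that #page-1, #page-2 links can :target it -->
    <div class="page" id="page-1">…</div>
    <div class="page" id="page-2">…</div>
  </div>
</div>

With that in place, the navigation links double as our pagination, and the .page:not(:target) rule above takes care of collapsing everything except the page named in the URL fragment.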
A horizontally paginated site, with transitions – no cheating Well, what other cards can we play? How about the checkbox hack? Sure, it’s a garish trick, but it might be the best we can do today. Check it out. label { cursor: pointer; } input { display: none; } input:not(:checked) + .page { max-height: 100vh; width: 0; } Finally, we can see the first page thanks to the state we are able to set on the appropriate radio button. Of course, now we don’t have URLs, so maybe this isn’t a winning plan after all. While our HTML and CSS toolkit may feel primitive at the moment, we certainly don’t want to sacrifice the addressability of the web. If there’s one bedrock principle, that’s it. A horizontally paginated site, with transitions – no cheating and a gorgeous homepage Gorgeous may not be the right word, but our little magazine is finally shaping up. Thanks to the CSS regions spec, we’ve got an exciting new power, the ability to begin an article in one place and bend it to our will. (Remember, your everyday browser isn’t going to work for these demos. Try the WebKit nightly build to see what we’re talking about.) As with the rest of the examples, we’re clearly abusing these features. Off-canvas layouts (you can thank Luke Wroblewski for the name) are simply not considered to be normal patterns… yet. Here’s a quick look at what’s going on: .excerpt-container { float: left; padding: 2em; position: relative; width: 100%; } .excerpt { height: 16em; } .excerpt_name_article-1, .page-1 .article-flow-region { -webkit-flow-from: article-1; } .article-content_for_article-1 { -webkit-flow-into: article-1; } The regions pattern comprises at least three components: a beginning; an ending; and a source. Using CSS, we’re able to define specific elements that should be available for the content to flow through. If magazine-style layouts are something you’re interested in learning more about (and you should be), be sure to check out the great work Adobe has been doing. Looking forward, and backward As designers, builders, and consumers of the web, we share a desire to see the usability and enjoyability of websites continue to rise. We are incredibly lucky to be working in a time when a three-month-old website can be laughably outdated. Our goal ought to be to improve upon both the weaknesses and the strengths of the web platform. We seek not only smoother transitions and larger canvases, but fine-grained addressability. Our URLs should point directly and unambiguously to specific content elements, be they pages, sections, paragraphs or words. Moreover, off-screen design patterns are essential to accommodating and empowering the multitude of devices we use to access the web. We should express the desire that interpage links take advantage of the CSS transitions which have been put to such good effect in every other aspect of our designs. Transitions aren’t just nice to have, they’re table stakes in the highly competitive world of native applications. The tools and technologies we have right now allow us to create smart, beautiful, useful webpages. With a little help, we can begin removing the seams and sutures that bind the web to an earlier, less sophisticated generation. 2012 Nathan Peretic nathanperetic 2012-12-21T00:00:00+00:00 https://24ways.org/2012/infinite-canvas-moving-beyond-the-page/ code
146 Increase Your Font Stacks With Font Matrix Web pages built in plain old HTML and CSS are displayed using only the fonts installed on users’ computers (@font-face implementations excepted). To enable this, CSS provides the font-family property for specifying fonts in order of preference (often known as a font stack). For example: h1 {font-family: 'Egyptienne F', Cambria, Georgia, serif} So in the above rule, headings will be displayed in Egyptienne F. If Egyptienne F is not available then Cambria will be used, failing that Georgia or the final fallback default serif font. This everyday bit of CSS will be common knowledge among all 24 ways readers. It is also a commonly held belief that the only fonts we can rely on being installed on users’ computers are the core web fonts of Arial, Times New Roman, Verdana, Georgia and friends. But is that really true? If you look in the fonts folder of your computer, or even your Mum’s computer, then you are likely to find a whole load of fonts besides the core ones. This is because many software packages automatically install extra typefaces. For example, Office 2003 installs over 100 additional fonts. Admittedly not all of these fonts are particularly refined, and not all are suitable for the Web. However they still do increase your options. The Matrix I have put together a matrix of (western) fonts showing which are installed with Mac and Windows operating systems, which are installed with various versions of Microsoft Office, and which are installed with Adobe Creative Suite. The matrix is available for download as an Excel file and as a CSV. There are no readily available statistics regarding the penetration of Office or Creative Suite, but you can probably take an educated guess based on your knowledge of your readers. The idea of the matrix is that you can use it to help construct your font stack. First of all pick the font you’d really like for your text – this doesn’t have to be in the matrix. Then pick the generic family (serif, sans-serif, cursive, fantasy or monospace) and a font from each of the operating systems. Then pick any suitable fonts from the Office and Creative Suite lists. For example, you may decide your headings should be in the increasingly ubiquitous Clarendon. This is a serif typeface. At OS-level the most similar is arguably Georgia. Adobe CS2 comes with Century Old Style which has a similar feel. Century Schoolbook is similar too, and is installed with all versions of Office. Based on this your font stack becomes: font-family: 'Clarendon Std', 'Century Old Style Std', 'Century Schoolbook', Georgia, serif Note the ‘Std’ suffix indicating a ‘standard’ OpenType file, which will normally be your best bet for more esoteric fonts. I’m not suggesting the process of choosing suitable fonts is an easy one. Firstly there are nearly two hundred fonts in the matrix, so learning what each font looks like is tricky and potentially time consuming (if you haven’t got all the fonts installed on a machine to hand you’ll be doing a lot of Googling for previews). And it’s not just as simple as choosing fonts that look similar or have related typographic backgrounds; they need to have similar metrics as well. This is especially true in terms of x-height which gives an indication of how big or small a font looks. Over to You The main point of all this is that there are potentially more fonts to consider than is generally accepted, so branch out a little (carefully and tastefully) and bring a little variety to sites out there.
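To run through the process once more, suppose you fancied a humanist sans-serif such as Frutiger for your body copy. Consulting the matrix in the same way might produce something like the following. I should stress this is a sketch of my own rather than a stack taken verbatim from the matrix, so do check the metrics before using it in anger:

font-family: 'Frutiger LT Std', Candara, 'Segoe UI', 'Lucida Grande', Tahoma, sans-serif

Here 'Frutiger LT Std' is the OpenType name of the ideal font (which needn't appear in the matrix at all), Candara arrives with Vista and Office 2007, Segoe UI with Vista, and Lucida Grande and Tahoma are the Mac and Windows OS-level fallbacks respectively.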
If you come up with any novel font stacks based on this approach, please do blog them (tagged as per the footer) and at some point they could all be combined in one place for everyone to consider. Appendix What about Linux? The only operating systems in the matrix are those from Microsoft and Apple. For completeness, Linux operating systems should be included too, although these are many and varied and very much in a minority, so I omitted them for the time being. For the record, some Linux distributions come packaged with Microsoft’s core fonts. Others use the Vera family, and others use the Liberation family which comprises fonts metrically identical to Times New Roman and Arial. Sources The sources of font information for the matrix are as follows: Windows XP SP2 Windows Vista Office 2003 Office 2007 Mac OSX Tiger Mac OSX Leopard (scroll down two thirds) Office 2004 (Mac) by inspecting my Microsoft Office 2004/Office/Fonts folder Office 2008 (Mac) is expected to be as Office 2004 with the addition of the Vista ClearType fonts Creative Suite 2 (see pdf link in first comment) Creative Suite 3 2007 Richard Rutter richardrutter 2007-12-17T00:00:00+00:00 https://24ways.org/2007/increase-your-font-stacks-with-font-matrix/ design
255 Inclusive Considerations When Restyling Form Controls I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility. I would like to begin by saying all these things. However, they’re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I’m thinking of next year. Returning to reality, styling form controls these days is more tricky and time consuming than flat out “hard”. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you’re tasked to match. However, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they’ve been designed for, then we’ve potentially deprived many people of the experiences they deserve. Quick level setting Getting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following: Many form elements can be styled directly through standard and browser specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels. It is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics. Make sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they’re judged solely by their visual presentation in a single browser, or with limited AT testing. Visually resetting and restyling form controls Over the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors. As I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standard-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation. See the Pen form control styling comparisons by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/gZOrZm/ Over the years, the ability to directly style form controls has been something many people have clamored for. However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated.
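To give a flavour of what that effort looks like, here is a deliberately minimal sketch (my own, not one of the tested CodePens) of flattening a checkbox with the appearance property and rebuilding its states by hand; as we'll see shortly, IE won't play along:

input[type="checkbox"] {
  -webkit-appearance: none; /* strip the native rendering... */
  -moz-appearance: none;
  appearance: none;          /* ...remembering the unprefixed property */
  width: 1em;
  height: 1em;
  border: 2px solid #595959;
}

input[type="checkbox"]:checked {
  /* the checked state is now entirely our responsibility */
  background: #00589b;
}

input[type="checkbox"]:focus {
  /* as is focus, which browsers previously gave us for free */
  outline: 2px solid #00589b;
  outline-offset: 2px;
}

Every state you see here is one the browser used to draw on our behalf.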
If you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control—all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browsers respect without you even realizing. You ever try playing with the accessibility settings of your display on macOS, or similar Windows settings? It is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements: Non-standard This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future. While this may be a deterrent for some, it’s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won’t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all. Can’t make it? Fake it. Internet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won’t render as intended. Additionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement or graceful degradation mindset, you’ll need to take a different approach in styling. Getting clever with markup and CSS The following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox. See the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/RqEayN/ Customizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen? As we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various different ways people may engage with a form control. It’s worth remembering: what might be typical interactions for ourselves may be problematic if not impossible for others. Limitations to cleverness Creative coding will allow us to apply more consistent custom styles to some of the more problematic form controls.
There will be a varied amount of custom markup, CSS, and sometimes JavaScript that will be needed to preserve the control’s inherent usability and accessibility for each control we take this approach to. However, this method of restyling still doesn’t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don’t have a native HTML element equivalent, such as a switch or multi-thumb range slider. Maybe there’s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control’s behavior to be worth the level of effort to implement. Here’s where we need to take another approach. Using ARIA when appropriate Sometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we’re not leveraging a native HTML control as our foundation, it doesn’t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA). ARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren’t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA: If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so. By using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible. Exceptions to the rule One example of a control that allows for an exception to the first rule of Using ARIA would be a switch control. Switches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized. While a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain which communicate a different way of interacting with the control than what you might expect from a native switch control. See the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen. https://codepen.io/scottohara/pen/XyvoeE/ By adding a role="switch" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, its inherent ability to be focused by the Tab key, and the Space key to toggle state. But while this is a valid approach to take in building a switch, how does this actually match up to reality? Does it pass the test(s)?
Whether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we’ve restyled or rebuilt works the way people expect it to, if not better. What we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example: Is the control implemented in all supported browsers? If not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field? If so: is each browser’s implementation a good user experience? Is there room for improvement that can be tested against the native baseline? Test with multiple input devices. Where the control is implemented, what is the quality of the user experience when using different input devices, such as mouse, touchscreen, keyboard, speech recognition or switch device, to name a few? You’ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or even allow keyboard accessibility. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of a control like these, it may make sense to maintain these native experiences as well. How well does the control take to custom styles? If a control can be styled enough to not need to be rebuilt from scratch, that’s great! But make sure that there are no adverse effects on the accessibility of it. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling. Always test with different browser and AT pairings to ensure nothing is lost when controls are restyled. Do specifications match reality? If recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations? For instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don’t match people’s expectations, then maybe another solution is in order? Wrapping up While styling form controls is definitely easier than it’s ever been, that doesn’t mean that it’s at all simple, nor will it likely ever be. The level of difficulty you’re going to face is going to depend entirely on what it is you’re hoping to style, add on to, or recreate. And even if you build your custom control exactly to specification, you’ll still be reliant on browsers and assistive technologies being able to fully understand the component they’ve been presented. Forms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option. 2018 didn’t end up being the year we got full customization of form controls sorted out. But that’s OK.
If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well thought out Form Design Patterns, and solve problems with an accessibility first approach, we may come to realize that we can get along just fine without fully branded drop downs. And hey. There’s always next year, right? 2018 Scott O'Hara scottohara 2018-12-13T00:00:00+00:00 https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/ code
169 Incite A Riot Given its relatively limited scope, HTML can be remarkably expressive. With a bit of lateral thinking, we can mark up content such as tag clouds and progress meters, even when we don’t have explicit HTML elements for those patterns. Suppose we want to mark up a short conversation: Alice: I think Eve is watching. Bob: This isn’t a cryptography tutorial …we’re in the wrong example! A note in the HTML 4.01 spec says it’s okay to use a definition list: Another application of DL, for example, is for marking up dialogues, with each DT naming a speaker, and each DD containing his or her words. That would give us: <dl> <dt>Alice</dt>: <dd>I think Eve is watching.</dd> <dt>Bob</dt>: <dd>This isn't a cryptography tutorial ...we're in the wrong example!</dd> </dl> This usage of a definition list is proof that writing W3C specifications and smoking crack are not mutually exclusive activities. “I think Eve is watching” is not a definition of “Alice.” If you (ab)use a definition list in this way, Norm will hunt you down. The conversation problem was revisited in HTML5. What if dt and dd didn’t always mean “definition title” and “definition description”? A new element was forged: dialog. Now the “d” in dt and dd doesn’t stand for “definition”, it stands for “dialog” (or “dialogue” if you can spell): <dialog> <dt>Alice</dt>: <dd>I think Eve is watching.</dd> <dt>Bob</dt>: <dd>This isn't a cryptography tutorial ...we're in the wrong example!</dd> </dialog> Problem solved …except that dialog is no longer in the HTML5 spec. Hixie further expanded the meaning of dt and dd so that they could be used inside details (which makes sense—it starts with a “d”) and figure (…um). At the same time as the content model of details and figure were being updated, the completely-unrelated dialog element was dropped. Back to the drawing board, or in this case, the HTML 4.01 specification. The spec defines the cite element thusly: Contains a citation or a reference to other sources. Perfect! There’s even an example showing how this can be applied when attributing quotes to people: As <CITE>Harry S. Truman</CITE> said, <Q lang="en-us">The buck stops here.</Q> For longer quotes, the blockquote element might be more appropriate. In a conversation, where the order matters, I think an ordered list would make a good containing element for this pattern: <ol> <li><cite>Alice</cite>: <q>I think Eve is watching.</q></li> <li><cite>Bob</cite>: <q>This isn't a cryptography tutorial ...we're in the wrong example!</q></li> </ol> Problem solved …except that the cite element has been redefined in the HTML5 spec: The cite element represents the title of a work … A person’s name is not the title of a work … and the element must therefore not be used to mark up people’s names. HTML5 is supposed to be backwards compatible with previous versions of HTML, yet here we have a semantic pattern already defined in HTML 4.01 that is now non-conforming in HTML5. The entire justification for the change boils down to this line of reasoning: Given that: titles of works are often italicised and given that: people’s names are not often italicised and given that: most browsers italicise the contents of the cite element, therefore: the cite element should not be used to mark up people’s names. In other words, the default browser styling is now dictating semantic meaning. The tail is wagging the dog.
Not to worry, the HTML5 spec tells us how we can mark up names in conversations without using the cite element: In some cases, the b element might be appropriate for names I believe the colloquial response to this is a combination of the letters W, T and F, followed by a question mark. The non-normative note continues: In other cases, if an element is really needed, the span element can be used. This is not a joke. We are seriously being told to use semantically meaningless elements to mark up content that is semantically meaningful. We don’t have to take it. Firstly, any conformance checker—that’s the new politically correct term for “validator”—cannot possibly check every instance of the cite element to see if it’s really the title of a work and not the name of a person. So we can disobey the specification without fear of invalidating our documents. Secondly, Hixie has repeatedly stated that browser makers have a powerful voice in deciding what goes into the HTML5 spec; if a browser maker refuses to implement a feature, then that feature should come out of the spec because otherwise, the spec is fiction. Well, one of the design principles of HTML5 is the Priority of Constituencies: In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. That places us—authors—above browser makers. If we resolutely refuse to implement part of the HTML5 spec, then the spec becomes fiction. Join me in a campaign of civil disobedience against the unnecessarily restrictive, backwards-incompatible change to the cite element. Start using HTML5 but start using it sensibly. Let’s ensure that bad advice remains fictitious. Tantek has set up a page on the WHATWG wiki to document usage of the cite element for conversations. Please contribute to it. 2009 Jeremy Keith jeremykeith 2009-12-11T00:00:00+00:00 https://24ways.org/2009/incite-a-riot/ code
19 In Their Own Write: Web Books and their Authors The currency of written communication — words on the page, words on the screen — comprises many denominations. To further our ends in web design and development, we freely spend and receive several: tweets aphoristic and trenchant, banal and perfunctory; blog posts and articles that call us to action or reflection; anecdotes, asides, comments, essays, guides, how-tos, manuals, musings, notes, opinions, stories, thoughts, tips pro and not-so-pro. So many, many words. Our industry (so much more than this, but what on earth are we, collectively?), our community thrives on writing and sharing knowledge and experience. 24 ways is a case in point. Everyone can learn and contribute through reading and writing — it’s what we’ve always done. To web authors and readers seeking greater returns, though, broader culture has vouchsafed an enduring and singular artefact: the book. Last month I asked a small sample of web book authors if they would be prepared to answer a few questions; most of them kindly agreed. In spirit, the survey was informal: I had neither hypothesis nor unground axe. I work closely with writers — and yes, I’ve edited or copy-edited books by several of the authors I surveyed — and wanted to share their thoughts about what it was like to write a book (“…it was challenging to find a coherent narrative”), why they did it (“Who wouldn’t want to?”) and what they learned from the experience (“That I could!”). Reasons for writing a book In web development the connection between authors and readers is unusually close and immediate. Working in our medium precipitates a unity that’s rare elsewhere. Yet writing and publishing a book, even during the current books revolution, is something only a few of us attempt and it remains daunting and a little remote. What spurs an author to try it? For some, it’s a deeply held resistance to prevailing trends: I felt that designers and developers needed to be shaken out of what seemed to me had been years of stagnation. —Andrew Clarke Or even a desire to protect us from ourselves: I felt that without a book that clearly defined progressive enhancement in a very approachable and succinct fashion, the web was at risk. I was seeing Tim Berners-Lee’s vision of universal availability slip away… —Aaron Gustafson Sometimes, there’s a knowledge gap to be filled by an author with the requisite excitement and need to communicate. Jon Hicks took his “pet subject” and was “enthused enough to want to spend all that time writing”, particularly because: …there was a gap in the market for it. No one had done it before, and it’s still on its own out there, with no competition. It felt like I was able to contribute something. Cennydd Bowles felt a professional itch at a particular point in his career, understanding that [a]s a designer becomes more senior, they start looking for ways to scale the effects of their work. For some, that leads into management. For others, into writing. Often, though, it’s also simply a personal challenge and ambition to explore a subject at length and create something substantial. Anna Debenham describes a motivation shared by several authors: To be able to point to something more tangible than an article and be able to say “I did that.” That sense of a book’s significance, its heft and gravity even, stems partly from the cultural esteem which honours books and their authors. Books have a long history as sources of wisdom, truth and power. 
Even with more books being published each year than ever before, writing one is still commonly considered a laudable achievement, including in our field. Challenges of writing a book Received wisdom has it that writing online should be brief and chunky and approachable: get to the point; divide it all up; subheadings and lists are our friends; write like you’re talking; no one has time to read. Much of such advice is true. Followed well, it lends our writing punch and pith, vigour and vim. The web is nimble, the web keeps up, and it suits what we write about developing for it. It’s perfect for delivering our observations, queries and investigations into all the various aspects of the work, professional and personal. Yet even for digital natives like web authors, books printed and electronic retain an attractive glister. Ideas can be developed more fully, their consequences explored to greater depth and extended with more varied examples, and the whole conveyed with more eloquence, more style. Why shouldn’t authors delay their conclusions if the intervening text is apposite, rich with value and helps to flesh out the skeleton of an argument? Conclusions might or might not be reached, of course, but a writer is at greater liberty in a book to digress in tangential and interesting ways. Writing a book involves committing time, energy, thought and money. As Brian Suda found, it can be tough “getting the ideas out of my head into a cohesive blob of text.” Some authors end up talking to themselves… It helps me to keep a real person in mind, someone who I’m talking to as I write. Sometimes I have the same conversations over and over in my head. —Andrew Clarke …while others are thinking ahead, concerned with how their book will be received: Would anyone want to read it? Would they care? Would it be respected by my peers? —Joe Leech Challenges that arose time and again included “starting” and “getting words on the page” as well as “knowing when to stop” or “letting go”. Personal organization problems and those caused by publishers were also widely mentioned. Time loomed large. Making time, finding time. Giving up “sleep and some sanity” and realizing “it will take you far, far, far longer than you naively assumed”. Importantly, writing time is time away from gainful employment: Aaron Gustafson found the hardest thing about writing a book to be “the loss of income while I was writing.” Perils and pleasures of editing Editing, be it structural, technical or copy editing, is founded on reciprocity. Without openness and a shared belief that the book is worthwhile, work can founder in acrimony and mistrust. Editors are a book’s first and most critical (in every sense) readers. Effective and perceptive editing makes a book as good as it can be, finding the book within the draft like sculpture reveals the statue in the stone. A good editor calls you out on poor assumptions and challenges you to really clarify your thinking. Whilst it can be difficult during the process to have your thinking challenged, it’s always been worth it — for me personally — in the long run. A good editor also reins you in when you’ve perhaps wandered off track or taken a little too long to make a point. —Christopher Murphy Andy Croll found editing “all positive” and Aaron Gustafson loves “working with a strong editor […] I want someone to tell it to me straight.” But it can be a rollercoaster, “both terrifying and the real moment of elation”. Mixed emotions during the editing process are common: It was very uncomfortable! 
I knew it was making the work stronger, but it was awkward having my inconsistencies and waffle picked apart. —Jon Hicks It can be distressing to have written work looked over by a professional, particularly for first-time book authors whose expertise lies elsewhere: I was a little nervous because I don’t consider myself a skilled writer — I never dreamed of becoming an author. I’m a designer, after all. —Geri Coady Communication is key, particularly when it comes to checking or changing the author’s words. I like a good banter between me and the tech editor — if we can have a proper argument in Word comments, that’s great. —Rachel Andrew But if handled poorly, small battles can break out. Rachel Andrew again: However, having had plenty of times where the technical editor has done nothing more than give a cursory glance, I started to leave little issues in for them to spot. If they picked them up I knew they were actually testing the code and I could be sure the work was being properly tech edited. If they didn’t spot them, I’d find someone myself to read through and check it! A major concern for writers is that their voices will be altered, filtered, mangled or otherwise obscured by the editing process. Good copy editing must remain unnoticed while enhancing the author’s voice in print. Donna Spencer appreciated the way her editor “tidied up my work and made it a million times better, but left it sounding exactly like me.” Similarly, Andrew Travers “was incredibly impressed at how well my editor tightened up my own writing without it feeling like another’s voice” and Val Head sums up the consensus that: the editor was able to help me express what I was trying to say in a better way […] I want to have editors for everything now. At the keyboard, keep your friends close, but your editors closer. Publishing and publishers Conditions ought to militate against the allure of writing a book about web design and development. More books are published each year than ever before, so readerships elude new authors and readers can struggle to find authors to trust in their fields of interest. New spaces for more expansive online writing about working on and with the web are opening up (sites like Contents Magazine and STET), and seminal online web development texts are emerging. Publishing online is simple, far-reaching and immediate. Much more so than articles and blog posts, books take time to research, write and read; add the complexity of commissioning, editing, designing, proofreading, printing, marketing and distribution processes, and it can take many months, even years to publish. The ceaseless headlong momentum of the web can leave articles more than a few weeks old whimpering in its wake, but updating them at least is straightforward; printed books about web development can depreciate as rapidly as the technology and techniques they describe, while retaining the “terrifying permanence that print bestows: your opinions will follow you forever”. So much moves on, and becomes out of date. Companies featured get bought by larger companies and die, techniques improve and solutions featured become terribly out of date. Unlike a website, which could be updated continuously, a book represents the thinking ‘at that time’. —Jon Hicks Publishers work hard to mitigate these issues, promoting new books and new authors, bringing authors and readers together under a trusted banner. 
When a publisher packages up and releases a writer’s words, it confers a seal of approval and “badge of quality”, very important to new authors. Publishers have other benefits to offer, from expert knowledge: My publisher was extraordinarily supportive (and patient). Her expertise in my chosen subject was both a pressure (I didn’t want to let her down) and a reassurance (if she liked it, I knew it was going to be fine). —Andrew Travers …to systems and support mechanisms set up specifically to encourage writers and publish books: Working as a team means you’re bringing in everyone’s expertise. —Chui Chui Tan As a writer, the best part about writing for a publisher was the writing infrastructure offered. —Christopher Murphy There can be drawbacks, however, and the occasional horror story: We were just one small package on a huge conveyor belt. The publisher’s process ruled all. —Cennydd Bowles It’s only looking back I realise how poorly some publishers treat writers — especially when the work is so poorly remunerated. My worst experience was when a publisher decided, after I had completed the book, that they wanted to push a different take on the subject than the brief I had been given. Instead of talking to me, they rewrote chunks of my words, turning my advice into something that I would never have encouraged. Ultimately, I refused to let the book go out under my name alone, and I also didn’t really promote the book as I would have had to point out the things I did not agree with that had been inserted! —Rachel Andrew Self-publishing is now a realistic option for web authors, and can offer “complete control over the end product” as well as the possibility of earning more than a “pathetic author revenue percentage”. There can be substantial barriers, of course, as self-publishing authors must face for themselves the risks and challenges conventional publishers usually bear. Ideally, creating a book is a collaboration between author and publisher. Geri Coady found that “working with my publisher felt more like working with a partner or co-worker, rather than working for a boss.” Wise words So, after meeting the personal costs of writing and publishing a web book — fear, uncertainty, doubt, typing (so much typing) — and then smelling the roses of success, what’s left for an author to say? Some words, perhaps, to people thinking of writing a book. Donna Spencer identifies a stumbling block common to many writers with an insight into the writing process: Having talked to a lot of potential authors, I think most have the problem that they haven’t actually figured out the ‘answer’ to their premise yet. They feel like they are stuck in the writing, but they are actually stuck in the thinking. For some no-nonsense, straightforward advice to cut through any anxiety or inadequacy, Rachel Andrew encourages authors to “treat it like any other work. There is no mystery to writing, you just have to write. Schedule the time, sit down, write words.” Tim Brown notes the importance of the editing process to refine a book and help authors reach their readers: Hire good editors. Editors are amazing thinkers who can vastly improve the quality and clarity of a piece of writing.
We are too much beholden to the practical demands and challenges of technology, so Aaron Gustafson suggests a writer should “favor philosophies over techniques and your book will have a longer shelf life.” Most intimations of renown and recognition are nipped in the bud by Joe Leech’s warning: “Don’t expect fame and fortune.” Although Cennydd Bowles’ bitter experience can be discouraging: The sacrifices required are immense. You probably won’t make it. …he would do things differently for a future book: I would approach the book with […] far more concern about conveying the damn joy of what I do for a living. The pleasure of writing, not just having written, is captured by James Chudley when he recalls: How much I enjoy writing and also how much I enjoy the discipline of having a side project like this. It’s a really good supplement to working life. And Jon Hicks has words that any author will find comforting: It will be fine. Everything will be fine. Just get on with it! The web expands effortlessly and ceaselessly to make room for all our words, yet it can also discourage the accumulation of any particular theme in one space, dividing rich seams and scattering knowledge across the web’s surface and into its deepest reaches. How many words become weightless and insubstantial, signals lost in the constant white noise of indistinguishable voices, unloved, unlinked? The web forgets constantly, despite the (somewhat empty) promise of digital preservation: articles and data are sacrificed to expediency, profit and apathy; online attention, acknowledgement and interest wax and wane in days, hours even. Books can encourage deeper engagement in readers, and foster faith in an author, particularly if released under the imprint of a recognized publisher within the field. And books are changing. Although still not widely adopted, EPUB3 is the new standard in ebooks, bringing with it new possibilities for interaction and connection: readers with the text; readers with readers; and readers with authors. EPUB3 is built on HTML, CSS and JavaScript — sound familiar? In the past, we took what we could from the printed page to make the web; now books are rubbing up against what we’ve made. So: a book. Ever thought you could write one? Should write one? Would? I’d like to thank all the authors who wrote their books and answered my questions. Rachel Andrew · CSS3 Layout Modules, The CSS3 Anthology and more Cennydd Bowles · Undercover User Experience Design, with James Box Tim Brown · Combining Typefaces James Chudley · Usability of Web Photos Andrew Clarke · Hardboiled Web Design Geri Coady · Colour Accessibility Andy Croll · HTML Email Anna Debenham · Front-end Style Guides Aaron Gustafson · Adaptive Web Design Val Head · CSS Animations Jon Hicks · The Icon Handbook Joe Leech · Psychology for Designers Christopher Murphy · The Craft of Words, with Niklas Persson Donna Spencer · Information Architecture, Card Sorting and How to Write Great Copy for the Web Brian Suda · Designing with Data Chui Chui Tan · International User Research Andrew Travers · Interviewing for Research 2013 Owen Gregory owengregory 2013-12-15T00:00:00+00:00 https://24ways.org/2013/web-books/ content
327 Improving Form Accessibility with DOM Scripting The form label element is an incredibly useful little element – it lets you link the form field unquestionably with the descriptive label text that sits alongside or above it. This is a very useful feature for people using screen readers, but there are some problems with this element. What happens if you have one piece of data that, for various reasons (validation, the way your data is collected/stored etc), needs to be collected using several form elements? The classic example is date of birth – ideally, you’ll ask for the date of birth once but you may have three inputs, one each for day, month and year, for which you also need to provide hints about the format required. The problem is that to be truly accessible you need to label each field. So you end up needing something to say “this is a date of birth”, “this is the day field”, “this is the month field” and “this is the year field”. Seems like overkill, doesn’t it? And it can uglify a form no end. There are various ways that you can approach it (and I think I’ve seen them all). Some people omit the label and rely on the title attribute to help the user through; others put text in a label but make the text 1 pixel high, merging into the background so that screen readers can still get that information. The most common method, though, is simply to set the label to not display at all using the CSS display:none property/value pairing (a technique which, for the time being, seems to work on most screen readers). But perhaps we can do more with this? The technique I am suggesting as another alternative is as follows (here comes the pseudo-code): Start with a totally valid and accessible form Ensure that each form input has a label that is linked to its related form control Apply a class to any label that you don’t want to be visible (for example superfluous) Then, through the magic of unobtrusive JavaScript/the DOM, manipulate the page as follows once the page has loaded: Find all the label elements that are marked as superfluous and hide them Find out what input element each of these label elements is related to Then apply a hint about formatting required for input (gleaned from the original, now-hidden label text) – add it to the form input as default text Finally, add in a behaviour that clears or selects the default text (as you choose) So, here’s the theory put into practice – a date of birth, grouped using a fieldset, and with the behaviours added in using DOM, and here’s the JavaScript that does the heavy lifting. But why not just use display:none? As demonstrated at Juicy Studio, display:none seems to work quite well for hiding label elements. So why use a sledge hammer to crack a nut? In all honesty, this is something of an experiment, but consider the following: Using the DOM, you can add extra levels of help, potentially across a whole form – or even range of forms – without necessarily increasing your markup (it goes beyond simply hiding labels) Screen readers today may identify a label that is set not to display, but they may not in the future – this might provide a way around By expanding this technique above, it might be possible to visually change the parent container that groups these items – in this case, a fieldset and legend, which are notoriously difficult to style consistently across different browsers – while still retaining the underlying semantic/logical structure Well, it’s an idea to think about at least. How is it for you?
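For the curious, a stripped-down version of that heavy lifting might look something like the following. This is a sketch of the approach rather than the actual script linked above; the superfluous class name comes from the steps described earlier, and the rest is illustrative:

window.onload = function() {
  var labels = document.getElementsByTagName('label');
  for (var i = 0; i < labels.length; i++) {
    // only touch the labels we've marked as superfluous
    if (labels[i].className.indexOf('superfluous') == -1) continue;
    var field = document.getElementById(labels[i].htmlFor);
    if (!field || !labels[i].firstChild) continue;
    labels[i].style.display = 'none';                // hide the label...
    field.value = labels[i].firstChild.nodeValue;    // ...and reuse its text as the hint
    field.hintText = field.value;                    // remember the hint on the element
    field.onfocus = function() {
      if (this.value == this.hintText) this.value = ''; // clear the hint on focus
    };
    field.onblur = function() {
      if (this.value == '') this.value = this.hintText; // restore it if left empty
    };
  }
};

Crude, certainly, but it demonstrates the point: all of the extra help lives in the script, and the markup underneath remains a perfectly ordinary labelled form.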
How else might you use DOM scripting to improve the accessibility or usability of your forms? 2005 Ian Lloyd ianlloyd 2005-12-03T00:00:00+00:00 https://24ways.org/2005/improving-form-accessibility-with-dom-scripting/ code
189 Ignorance Is Bliss This is a true story. Meet Mike Mike’s a smart guy. He knows a great browser when he sees one. He uses Firefox on his Windows PC at work and Safari on his Mac at home. Mike asked us to design a Web site for his business. So we did. We wanted to make the best Web site for Mike that we could, so we used all of the CSS tools that are available today. That meant using RGBa colour to layer elements, border-radius to add subtle rounded corners and (possibly most experimental of all new CSS) generated gradients. The home page Mike sees in Safari on his Mac Mike loves what he sees. Meet Sam Sam works with Mike. She uses Internet Explorer 7 because it came on the Windows laptop that the company bought her when she joined. The home page Sam sees in Internet Explorer 7 on her PC Sam loves the new Web site too. How could both of them be happy when they experienced the Web site differently? The new WYSIWYG When I first presented my designs to Mike and Sam, I showed them a Web page made with HTML and CSS in their respective browsers and not a picture of a Web page. By showing neither of them a static image of my design, I set none of the false expectations that, by definition, a static Photoshop or Fireworks visual would have established. Mike saw rounded corners and subtle shadows in Firefox and Safari. Sam saw something equally as nice, just a little different, in Internet Explorer. Both were very happy because they saw something that they liked. Neither knew, or needed to know, about the subtle differences between browsers. Their users don’t need to know either. That’s because in the real world, people using the Web don’t find a Web site that they like, then open up another browser to check that it looks the same. They simply buy what they came to buy, read what they came to read, do what they came to do, then get on with their lives in blissful ignorance of what they might be seeing in another browser. Often when I talk or write about using progressive CSS, people ask me, “How do you convince clients to let you work that way? What’s your secret?” Secret? I tell them what they need to know, on a need-to-know basis. Epilogue Sam has a new iPhone that Mike bought for her as a reward for achieving her sales targets. She loves her iPhone and was surprised at just how fast and good-looking the company Web site appears on that. So she asked, “Andy, I didn’t know you optimised our site for mobile. I don’t remember seeing an invoice for that.” I smiled. “That one was on the house.” 2009 Andy Clarke andyclarke 2009-12-23T00:00:00+00:00 https://24ways.org/2009/ignorance-is-bliss/ business
9 How to Write a Book Were you recently inspired to write a book after reading Owen Gregory’s compendium of author insights? Maybe so inspired to strike out on your own and self-publish? Based on personal experience, writing a book is hard. It requires a great deal of research, experience, and patience. To be able to consolidate your thoughts and what you’ve learned into a sensible and readable tome is an admirable feat. To decide to self-publish and take on yourself all of the design, printing, distribution, and so much more is tantamount to insanity. Again, based on personal experience. So, why might you want to self-publish? If you’ve spent many a late night doing cross-browser testing just to know that your site works flawlessly in twenty-four different browsers — including Mosaic, of course — then maybe you’ll understand the fun that comes from doing it all. Working with a publisher, you’re left to focus on one core thing: writing. That’s a good thing. A good publisher has the right resources to help you get your idea polished and the distribution network to get your book on store shelves around the world. It’s a very proud moment to be able to walk into a book store and see your book sitting there on the shelf. Self-publishing can also be a wonderful process as you get to own it from beginning to end. Every decision is yours and if you’re a control freak like me, this can be a very rewarding experience. While there are many aspects to self-publishing, I’m going to speak to just one of them: creating an ebook. Formats In creating an ebook, you first need to decide what formats you wish to support. There are three main formats, each with their own pros and cons: PDF EPUB MOBI PDFs are supported on almost every device (Windows, Mac, Kindle, iPad, Android, etc.) and can even be a stepping stone to creating a print version of your book. PDFs allow for full typographic and design control, but at the cost of needing to fit things into a predefined page layout. Is it US Letter or A4? Or is it a format that isn’t easily printed by readers on their home printers? EPUB is a more fluid format that is supported by the Apple iPad, iPhone, and now on the desktop with OS X Mavericks. It’s also supported by Google Play for Android devices. While EPUB is supported on other devices, you’re likely to choose EPUB because you’re targeting your book at the Apple audience. The EPUB format is HTML-based with support for some CSS and even video and interactive elements. You can create very rich and exciting experiences using the EPUB format that just aren’t possible with PDF or MOBI. However, if you decide to support multiple file formats, you’ll likely find — as I did — that a consistent experience between all formats is easier to build and maintain, and therefore the extra benefits of interactivity go out the window. MOBI is a format originally developed for the Mobipocket Reader but more popularly supported by the Amazon Kindle. If you’re looking to attract the Kindle audience or publish to Amazon via the Kindle Direct Publishing platform then the HTML-based MOBI format is the format you’ll want to go with. Distribution will probably factor heavily into what format you decide to go with. Many people I know who self-publish go with PDF only due to its ubiquity. If you want to garner a wider audience by distributing via Amazon or the iBookstore then you’ll need to think about supporting all three formats (as I did). What tools should I use?
I spent a lot of time figuring out the right toolset and finally got something that suits me just right. In the past, when working with a publisher, I was given a Microsoft Word template that was passed back and forth between myself, the editor, and tech reviewer. This template has been the bane of any book writer that I’ve spoken to. Not every publisher is like that, though. Some publishers, like O’Reilly, use DocBook, an XML-based format that can be converted into PDF, EPUB, and MOBI. Publishers already have a style guide and, whether it’s DocBook or a Word template, they have the tools already in place to easily convert your work into multiple formats. Self-publishing means that you’ll likely have to do a lot of tweaking to get things looking and working the way you want them to. I tried DocBook and the open source export tools didn’t create HTML to my liking. Fixing even the most mundane things required fiddling with XSL transformations for hours on end. Not the way I like to spend my time. I can only imagine the hoops I would’ve had to go through to get a PDF to look half-decent. Tools like Pages or Scrivener offer up the ability to publish to multiple formats, too, but none offered me the control over the output that I truly desired. Have I mentioned that I’m a control freak? I ended up writing my book using a technology that I already knew quite well: HTML. By writing in HTML, I already had something that I could post on my website, use for the EPUB and use for the MOBI format. All without having to change a thing. (That’s right: the same HTML that is used on SMACSS.com is used in the EPUB and is used in the MOBI.) What about PDF? I could open up the HTML in a web browser, choose Save as PDF and be done with it but let’s face it: the filename and date attached to every single page doesn’t exactly scream professional. Web browsers actually do a surprisingly poor job of supporting the CSS paged media spec. I had resorted to copying and pasting the content into Pages and saving as PDF from there. It wasn’t elegant but it worked. However, any changes to my HTML source required redoing those changes in Pages, as well. Then I met my Prince Charming: Prince XML. It’s pricey but it works incredibly well. It takes HTML and CSS (the very formats I’ve been using for all of my other outputs) and will generate a PDF via a command line interface. Prince supports CSS paged media including headers, footers, page counts, and alternating page styles. From one format, HTML, I can now easily publish to PDF, MOBI, and EPUB, and even my website. I use the PDF version to send to the printer along with cover art to be bound and ready to ship around the world. It’s amazing how versatile HTML (and CSS) is. To learn more about writing books with HTML and CSS, I recommend reading Building Books with CSS3 over at A List Apart. Creating an EPUB Let’s take a step back. Prince gets us from HTML to PDF but how do we make an EPUB out of the HTML? An EPUB file is essentially a ZIP file with a renamed extension. There are some core files that you need to start with at the root of the project: a META-INF folder containing container.xml, along with the mimetype, content.opf, and toc.ncx files. After that, you can start adding your content to the project. Be sure to update the toc.ncx (Table of Contents) and content.opf (the ebook manifest) with any changes you make to your project. You can learn more about the file formats with the EPUB Format Construction Guide.
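To make that structure a little more concrete, here is a minimal sketch of the two boilerplate files. This follows the standard EPUB container format; the full-path value assumes your content.opf sits at the root of the project, as in the listing above. The mimetype file contains the single line application/epub+zip (commonly with no trailing newline), and META-INF/container.xml simply points the reading system at your manifest:

<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <!-- tells the ereader where to find the ebook manifest -->
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>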
Once all your files are in place, you’ll need to create the EPUB file by running two commands (on OS X, at least): zip -X0 your-ebook.epub mimetype zip -Xur9D your-ebook.epub * The mimetype needs to be the first file inside the ZIP file and therefore gets added first. Then, the rest of the files are added. I’ve added a function to my .bash_profile to make this even easier: function epub() { zip -q0X $@ mimetype; zip -qXr9D $@ * } Then, within the folder from which I want to create an ebook, I just run epub your-ebook.epub from the Terminal command line and the EPUB file should be ready to go. Creating the MOBI We have our EPUB and we have our PDF. The last step is the MOBI file. For this, I call upon Calibre. Calibre can be used as a reader and as a library but I use it exclusively to export my EPUB files to MOBI. Calibre includes a command line utility to convert from EPUB to MOBI. (To install the command line tools, go to Preferences > Advanced > Miscellaneous and click Install Command Line Tools.) ebook-convert your-ebook.epub your-ebook.mobi Spread the joy Now that you have all of your different file formats, you need to get them into the hands of people who want to (ho-ho-hopefully) buy your book! There are a number of marketplaces such as Amazon’s Kindle Direct Publishing, iBookstore, Google Play, and NOOK Press. Some publishers, like PragProg and O’Reilly will also add self-published books to their roster if they feel it’s a good fit for their audience. With any distribution, you’ll have to give up a percentage of your sales—from 30% to 70% of each sale, so consider your options wisely. Of course, you can always open your own online store and reap as much of the revenue as possible, assuming you can get the traffic to your site. Handling your own distribution allows you to create a deeper one-on-one connection with your customers, something that is impossible with other distribution channels since you don’t get customer information through other services—even though you are giving them a huge chunk of your sales! Go forth and prosper There’s a lot of thought and time that goes into writing a book and just as much thought and time can go into creating, publishing, and marketing your book once you’re done. In the end, self-publishing can be a very rewarding process and well worth the time that goes into it. 2013 Jonathan Snook jonathansnook 2013-12-19T00:00:00+00:00 https://24ways.org/2013/how-to-write-a-book/ content
248 How to Use Audio on the Web I know what you’re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! 🙉 You’re having flashbacks, flashbacks to the days of yore, when we had a <bgsound> element and yup did everyone think that was the most rad thing since <blink>. I mean put those two together with a <marquee>, only use CSS colour names, make sure your borders were all set to ridge and you’ve got yourself the neatest website since 1998. The sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then. Yes it is 2018, the end of in fact, soon to be 2019. We are certainly living in the future. Hoverboards? Self-driving cars. Holodecks? VR headsets. Rocket boots? Drone racing. Sound on websites? Get real, Ruth. We can’t help but be jaded, even though the <bgsound> element is deprecated, and the autoplay policy appeared this year. Although still in its infancy, the policy “controls when video and audio is allowed to autoplay”, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future. But then of course comes the question, having lived in a muted present for so long, where and why would you use audio? ✨ Showcase Time ✨ There are some incredible uses of audio on websites today. This is my personal favourite: futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience. futurelibrary.no Another site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good. pottermore.com Eighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six and a half year old man. The background music playing on this site is not offensive, it adds to the experience. Eighty-six and a half years Sound can be powerful and in some cases useful. Last year I wrote about using it to help validate forms. Audiochart is a library which “allows the user to explore charts on web pages using sound and the keyboard”. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here. Then there’s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. A website, all made possible through the powers we have with the Web Audio API today. lightnote.co learningmusic.ableton.com Considerations Yes, tis the season, let’s be more thoughtful about our audio. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to ‘enter’ the site and headphones are recommended. This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) by stating headphones are recommended you are setting the user’s expectations: they will expect sound and, if in a public setting, can enlist the use of a common electronic device to cause less embarrassment. Eighty-six and a half years Allowing mute and/or volume control clearly within the user interface is a good idea. It won’t draw the user out of the experience, it’ll give more control to the user about what audio they want to hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a clearly visible volume control than to fumble with device settings. Indicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn’t always the first place to look for everyone. To The Future So let’s go! We see amazing demos built with Web Audio, and I’m sure, like me, they make you think, oh wow I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that. But audio doesn’t actually need to be all bells and whistles (hey, it’s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started to introduce some good sound design in your web design. Isn’t it great then that there’s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers).
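To give a flavour of how little code that involves, here is a rough sketch (the button id and file path are made up for illustration; this assumes a browser with the standard AudioContext, and note that createStereoPanner isn’t supported absolutely everywhere):

// Start a sound from a user gesture (satisfying the autoplay policy),
// with simple volume and stereo panning control.
const context = new AudioContext();
const gain = context.createGain();
const panner = context.createStereoPanner();
gain.gain.value = 0.5; // half volume
panner.pan.value = -0.3; // a little to the left
gain.connect(panner).connect(context.destination);

async function play(url) {
  // fetch and decode the audio file, then play it through our node graph
  const response = await fetch(url);
  const buffer = await context.decodeAudioData(await response.arrayBuffer());
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(gain);
  source.start();
}

document.querySelector('#enter').addEventListener('click', () => {
  // resuming the context inside a click handler counts as a user gesture
  context.resume().then(() => play('/audio/ambience.mp3'));
});

Connecting the gain and panner nodes once, up front, means every sound played through them picks up the same volume and stereo position, which suits ambient background sound nicely.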
This year I believe we have all experienced the web as a shopping mall more than ever. Its shining store fronts, flashing adverts, fast food, loud noises. Let’s use 2019 to create more forests to explore, oceans to dive and mountains to climb. 2018 Ruth John ruthjohn 2018-12-22T00:00:00+00:00 https://24ways.org/2018/how-to-use-audio-on-the-web/ design
308 How to Make a Chrome Extension to Delight (or Troll) Your Friends If you’re like me, you grew up drawing mustaches on celebrities. Every photograph was subject to your doodling wrath, and your brilliance was taken to a whole new level with computer programs like Microsoft Paint. The advent of digital cameras meant that no one was safe from your handiwork, especially not your friends. And when you finally got your hands on Photoshop, you spent hours maniacally giggling at your artistic genius. But today is different. You’re a serious adult with important things to do and a reputation to uphold. You keep up with modern web techniques and trends, and have little time for fun other than a random Giphy on Slack… right? Nope. If there’s one thing 2016 has taught me, it’s that we—the self-serious, world-changing tech movers and shakers of the universe—haven’t changed one bit from our younger, more delightable selves. How do I know? This year I created a Chrome extension called Tabby Cat and watched hundreds of thousands of people ditch productivity for randomly generated cats. Tabby Cat replaces your new tab page with an SVG cat featuring a silly name like “Stinky Dinosaur” or “Tiny Potato”. Over time, the cats collect goodies that vary in absurdity from fishbones to lawn flamingos to Raybans. Kids and adults alike use this extension, and analytics show the majority of use happens Monday through Friday from 9-5. The popularity of Tabby Cat has convinced me there’s still plenty of room in our big, grown-up hearts for fun. Today, we’re going to combine the formula behind Tabby Cat with your intrinsic desire to delight (or troll) your friends, and create a web app that generates your friends with random objects and environments of your choosing. You can publish it as a Chrome extension to replace your new tab, or simply host it as a website and point to it with the New Tab Redirect extension. Here’s a sneak peek at my final result featuring my partner, my cat, and me in cheerfully weird accessories. Your result will look however you want it to. Along the way, we’ll cover how to build a Chrome extension that replaces the new tab page, and explore ways to program randomness into your work to create something truly delightful. What you’ll need Adobe Illustrator (or a similar illustration program to export PNG) Some images of your friends A text editor Note: This can be as simple or as complex as you want it to be. Most of the application is pre-built so you can focus on kicking back and getting in touch with your creative side. If you want to dive in deeper, you’ll find ways to do it. Getting started Download a local copy of the boilerplate for today’s tutorial here, and open it in a text editor. Inside, you’ll find a simple web app that you can run in Chrome. Open index.html in Chrome. You should see a grey page that says “Noname”. Open template.pdf in Adobe Illustrator or a similar program that can export PNG. The file contains an artboard measuring 800px x 800px, with a dotted blue outline of a face. This is your template. Note: We’re using Google Chrome to build and preview this application because the end-result is a Chrome extension. This means that the application isn’t totally cross-browser compatible, but that’s okay. Step 1: Gather your friends The first thing to do is choose who your muses are. Since the holidays are upon us, I’d suggest finding inspiration in your family. Create your artwork For each person, find an image where their face is pointed as forward as possible. Place the image onto the Artwork layer of the Illustrator file, and line up their face with the template. Then, rename the artboard something descriptive like face_bob. Here’s my crew: As you can see, my use of the word “family” extends to cats. There’s no judgement here. Notice that some of my photos don’t completely fill the artboard – that’s fine. The images will be clipped into ovals when they’re rendered in the application. Now, export your images by following these steps: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your faces. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your images to config.js Open scripts/config.js. This is where you configure your extension. Add key-value pairs to the faces object. The key should be the person’s name, and the value should be the filepath to the image. faces: { leslie: 'images/face_leslie.png', kyle: 'images/face_kyle.png', beep: 'images/face_beep.png' } The application will choose one of these options at random each time you open a new tab. This pattern is used for everything in the config file. You give the application groups of choices, and it chooses one at random each time it loads.
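Under the hood, that pattern needs almost no code. Here is an illustrative sketch (the helper name is hypothetical; the boilerplate already does the equivalent for you):

// pick one entry at random from a group of choices
function pickRandom(group) {
  return group[Math.floor(Math.random() * group.length)];
}

// e.g. choosing a face each time a new tab opens
var names = Object.keys(config.faces);
var name = pickRandom(names);
var facePath = config.faces[name];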
The only thing that’s special about the faces object is that the person’s name will also be displayed when their face is chosen. Now, when you refresh the project in Chrome, you should see one of your friends along with their name, like this: Congrats, you’re off and running! Step 2: Add adjectives Now that you’ve loaded your friends into the application, it’s time to call them names. This step definitely yields the most laughs for the least amount of effort. Add a list of adjectives into the prefixes array in config.js. To get the words flowing, I took inspiration from ways I might describe some of my relatives during a holiday gathering… prefixes: [ 'Loving', 'Drunk', 'Chatty', 'Merry', 'Creepy', 'Introspective', 'Cheerful', 'Awkward', 'Unrelatable', 'Hungry', ... ] When you refresh Chrome, you should see one of these words prefixed before your friend’s name. Voila! Step 3: Choose your color palette Real talk: I’m bad at choosing color palettes, so I have a trick up my sleeve that I want to share with you. If you’ve been blessed with the gift of color aptitude, skip ahead. How to choose colors To create a color palette, I start by going to Coolors.co, and I hit the spacebar until I find a palette that I like. We need a wide gamut of hues for our palette, so lock down colors you like and keep hitting the spacebar until you find a nice, full range. You can use as many or as few colors as you like. Copy these colors into your swatches in Adobe Illustrator. They’ll be the base for any illustrations you create later. Now you need a set of background colors. Here’s my trick to making these consistent with your illustration palette without completely blending in. Use the “Adjust Palette” tool in Coolors to dial up the brightness a few notches, and the saturation down just a tad to remove any neon effect. These will be your background colors. Add your background colors to config.js Copy your hex codes into the bgColors array in config.js. bgColors: [ '#FFDD77', '#FF8E72', '#ED5E84', '#4CE0B3', '#9893DA', ... ] Now when you go back to Chrome and refresh the page, you’ll see your new palette! Step 4: Accessorize This is the fun part.
We’re going to illustrate objects, accessories, lizards—whatever you want—and layer them on top of your friends. Your objects will be categorized into groups, and one option from each group will be randomly chosen each time you load the page. Think of a group like “hats” or “glasses”. This will allow combinations of accessories to show at once, without showing two of the same type on the same person. Create a group of accessories To get started, open up Illustrator and create a new artboard out of the template. Think of a group of objects that you can riff on. I found hats to be a good place to start. If you don’t feel like illustrating, you can use cut-out images instead. Next, follow the same steps as you did when you exported the faces. Here they are again: Turn the Template layer off and export the images as PNGs. In the Export dialog, tick the “Use Artboards” checkbox and enter the range with your hats. Export at 72ppi to keep things running fast. Save your images into the images/ folder in your project. Add your accessories to config.js In config.js, add a new key to the customProps object that describes the group of accessories that you just created. Its value should be an array of the filepaths to your images. This is my hats array: customProps: { hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ] } Refresh Chrome and behold, accessories! Create as many more accessories as you want Repeat the steps above to create as many groups of accessories as you want. I went on to make glasses and hairstyles, so my final illustrator file looks like this: The last step is adding your new groups to the config object. List your groups in the order that you want them to be stacked in the DOM. My final output will be hair, then hats, then glasses: customProps: { hair: [ 'images/hair_bowl.png', 'images/hair_bob.png' ], hats: [ 'images/hat_crown.png', 'images/hat_santa.png', 'images/hat_tophat.png', 'images/hat_antlers.png' ], glasses: [ 'images/glasses_aviators.png', 'images/glasses_monacle.png' ] } And, there you have it! Randomly generated friends with random accessories. Feel free to go much crazier than I did. I considered adding a whole group of animals in celebration of the new season of Planet Earth, or even adding Sir David Attenborough himself, or doing a bit of role reversal and featuring the animals with little safari hats! But I digress… Step 5: Publish it It’s time to put this in your new tabs! You have two options: Publish it as a Chrome extension in the Chrome Web Store. Host it as a website and point to it with the New Tab Redirect extension. Today, we’re going to cover Option #1 because I want to show you how to make the simplest Chrome extension possible. However, I recommend Option #2 if you want to keep your project private. Every Chrome extension that you publish is made publicly available, so unless your friends want their faces published to an extension that anyone can use, I’d suggest sticking to Option #2. How to make a simple Chrome extension to replace the new tab page All you need to do to make your project into a Chrome extension is add a manifest.json file to the root of your project with the following contents. 
There are plenty of other properties that you can add to your manifest file, but these are the only ones that are required for a new tab replacement: { "manifest_version": 2, "name": "Your extension name", "version": "1.0", "chrome_url_overrides" : { "newtab": "index.html" } } To test your extension, you’ll need to run it in Developer Mode. Here’s how to do that: Go to the Extensions page in Chrome by navigating to chrome://extensions/. Tick the checkbox in the upper-right corner labelled “Developer Mode”. Click “Load unpacked extension…” and select this project. If everything is running smoothly, you should see your project when you open a new tab. If there are any errors, they should appear in a yellow box on the Extensions page. Voila! Like I said, this is a very light example of a Chrome extension, but Google has tons of great documentation on how to take things further. Check it out and see what inspires you. Share the love Now that you know how to make a new tab extension, go forth and create! But wield your power responsibly. New tabs are opened so often that they’ve become a part of everyday life–just consider how many tabs you opened today. Some people prefer to-do lists in their tabs, and others prefer cats. At the end of the day, let’s make something that makes us happy. Cheers! 2016 Leslie Zacharkow lesliezacharkow 2016-12-08T00:00:00+00:00 https://24ways.org/2016/how-to-make-a-chrome-extension/ code
73 How to Make Your Site Look Half-Decent in Half an Hour Programmers like me are often intimidated by design – but a little effort can give a huge return on investment. Here are one coder’s tips for making any site quickly look more professional. I am a programmer. I am not a designer. I have a degree in computer science, and I don’t mind Comic Sans. (It looks cheerful. Move on.) But although I am a programmer, I want to make my sites look attractive. This is partly out of vanity, and partly realism. Vanity because I want people to think my work is good, and realism because the research shows that people won’t think a site is credible unless it also looks attractive. For a very long time after I became a programmer, I was scared of design. Design seemed to consist of complicated rules that weren’t written down anywhere, plus an unlearnable sense of taste, possessed only by a black-clad elite. But a little while ago, I decided to do my best to hack what it took to make my own projects look vaguely attractive. And although this doesn’t come close to the effect a professional designer could achieve, gathering these resources for improving a site’s look and feel has been really helpful. If I hadn’t figured out some basic design shortcuts, it’s unlikely that a weekend hack of mine would have ended up on page three of the Daily Mail. And too often now, I see excellent programming projects that don’t reach the audience they deserve, simply because their design doesn’t match their execution. So, if you are a developer, my Christmas present to you is this: my own collection of hacks that, used rightly, can make your personal programming projects look professional, quickly. None are hard to learn, most are free, and they let you focus on writing code. One thing to note about these tips, though. They are a personal, pragmatic compilation. They are suggestions, not a definitive guide. You will definitely get better results by working with a professional designer, and by studying design more deeply. If you are a designer, I would love to hear your suggestions for the best tools that I’ve missed, and I’d love to know how we programmers can do things better. With that, on to the tools… 1. Use Bootstrap If you’re not already using Bootstrap, start now. I really think that Bootstrap is one of the most significant technical achievements of the last few years: it democratizes the whole process of web design. Essentially, Bootstrap is a grid system, with a bunch of common elements. So you can lay your site out how you want, drop in simple elements like forms and tables, and get a good-looking, consistent result, without spending hours fiddling with CSS. You just need HTML. Another massive upside is that it makes it easy to make any site responsive, so you don’t have to worry about writing media queries. Go, get Bootstrap and check out the examples. To keep your site lightweight, you can customize your download to include only the elements you want. If you have more time, then Mark Otto’s article on why and how Bootstrap was created at Twitter is well worth a read. 2. Pimp Bootstrap Using Bootstrap is already a significant advance on not using Bootstrap, and massively reduces the tedium of front-end development. But you also run the risk of creating Yet Another Bootstrap Site, or Hack Day Design, as it’s known. If you’re really pressed for time, you could buy a theme from Wrap Bootstrap. These are usually created by professional designers, and will give a polish that we can’t achieve ourselves. Your site won’t be unique, but it will look good quickly. Luckily, it’s pretty easy to make Bootstrap not look too much like Bootstrap – using fonts, CSS effects, background images, colour schemes and so on. Most of the rest of this article covers different ways to achieve this. We are going to customize this Bootstrap example page. This already has some custom CSS in the <head>. We’ll pull it all out, and create a new CSS file, custom.css. Then we add a reference to it in the header.
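That reference is just an ordinary stylesheet link. Assuming the file lives in a /css/ directory (the path here is illustrative), it might look like this:

<link rel="stylesheet" href="/css/custom.css">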
Now we’re ready to start customizing things. 3. Fonts Web fonts are one of the quickest ways to make your site look distinctive, modern, and less Bootstrappy, so we’ll start there. First, we can add some sweet fonts, from Google Web Fonts. The intimidating bit is choosing fonts that look nice together. Luckily, there are plenty of suggestions around the web: we’re going to use one of DesignShack’s suggested free Google Fonts pairings. Our fonts are Corben (for headings) and Nobile (for body copy). Then we add references to them in our <head>. <link href="http://fonts.googleapis.com/css?family=Corben:bold" rel="stylesheet" type="text/css"> <link href="http://fonts.googleapis.com/css?family=Nobile" rel="stylesheet" type="text/css"> …and this to custom.css: h1, h2, h3, h4, h5, h6 { font-family: 'Corben', Georgia, Times, serif; } p, div { font-family: 'Nobile', Helvetica, Arial, sans-serif; } Now our example looks like this. It’s not going to win any design awards, but it’s immediately better: I also recommend the web font services Fontdeck or Typekit – these have a wider selection of fonts, and are worth the investment if you regularly need to make sites look good. For more font combinations, Just My Type suggests appealing pairings from Typekit. Finally, you can experiment with type pairing ideas at Type Connection. For the design background on pairing fonts, Typekit’s post is worth a read. 4. Textures An instant way to make a site look classy is to use textures. You know the grey, stripy, indefinably elegant background on 24ways.org? That. If only there was a superb resource listing attractive, free, ready-to-use textures… Oh wait, there’s Atle Mo’s Subtle Patterns. We’re going to use Cream Dust, for an effect that can only be described as subtle. We download the file to a new /img/ directory, then add this to the CSS file: body { background: url(/img/cream_dust.png) repeat 0 0; } Bang: For some design background on patterns, I recommend reading through Smashing Magazine’s guidelines on textures. (TL;DR version: use textures to enhance beauty, and clarify the information architecture of your site; but don’t overdo it, or inadvertently obscure your text.) Still more to do, though. Onwards. 5. Icons Last year’s 24 ways taught us to use icon fonts for our site icons. This is great for the time-pressed coder, because icon fonts don’t just cut down on HTTP requests – they’re a lot quicker to set up than image-based icons, too. Bootstrap ships with an extensive, free for commercial use icon set in the shape of Font Awesome. Its icons are safe for screen readers, and can even be made to work in IE7 if needed (we’re not going to bother here). To start using these icons, just download Font Awesome, and add the /fonts/ directory to your site and the font-awesome.css file into your /css/ directory. Then add a reference to the CSS file in your header: <link rel="stylesheet" href="/css/font-awesome.css"> Finally, we’ll add a truck icon to the main action button, as follows. Why a truck? Why not?
<a class="btn btn-large btn-success" href="#"><i class="icon-truck"></i> Sign up today</a> We’ll also tweak our CSS file to stop the icon nudging up against the button text: .jumbotron .btn i { margin-right: 8px; } And this is how it looks: Not the most exciting change ever, but it livens up the page a bit. The licence is CC-BY-3.0, so we also include a mention of Font Awesome and its URL in the source code. If you’d like something a little more distinctive, Shifticons lets you pay a few cents for individual icons, with the bonus that you only have to serve the icons you actually use, which is more efficient. Its icons are also friendly to screen readers, but won’t work in IE7. 6. CSS3 The next thing you could do is add some CSS3 goodness. It can really help the key elements of the site stand out. If you are pressed for time, just adding box-shadow and text-shadow to emphasize headings and standouts can be useful: h1 { text-shadow: 1px 1px 1px #ccc; } .div-that-you want-to-stand-out { box-shadow: 0 0 1em 1em #ccc; } We have a little more time though, so we’re going to do something more subtle. We’ll add a radial gradient behind the main heading, using an online gradient editor. The output is hefty, but you can see it in the CSS. Note that we also need to add the following to our HTML, for IE9 support: <!--[if gte IE 9]> <style type="text/css"> .gradient { filter: none; } </style> <![endif]--> And the effect – I don’t know what a designer would think, but I like the way it makes the heading pop. For a crash course in useful modern CSS effects, I highly recommend CodeSchool’s online course in Functional HTM5 and CSS3. It costs money ($25 a month to subscribe), but it’s worth it for the time you’ll save. As a bonus, you also get access to some excellent JavaScript, Ruby and GitHub courses. (Incidentally, if you find yourself fighting with basic float and display attributes in CSS – and there’s no shame in it, CSS layout is not intuitive – I recommend the CSS Cross-Country course at CodeSchool.) 7. Add a twist We could leave it there, but we’re going to add a background image, and give the site some personality. This is the area of design that I think many programmers find most intimidating. How do we create the graphics and photographs that a designer would use? The answer is iStockPhoto and its competitors – online image libraries where you can find and pay for images. They won’t be unique, but for our purposes, that’s fine. We’re going to use a Christmassy image. For a twist, we’re going to use Backstretch to make it responsive. We must pay for the image, then download it to our /img/ directory. Then, we set it as our <body>’s background-image, by including a JavaScript file with just the following line: $.backstretch("/img/winter.jpg"); We also reset the subtle pattern to become the background for our container image. It would look much better transparent, so we can use this technique in GIMP to make it see-through: .container-narrow { background: url(/img/cream_dust_transparent.png) repeat 0 0; } We also fiddle with the padding on body and .container-narrow a bit, and this is the result: (Aside: If this were a real site, I’d want to buy images in multiple sizes and ensure that Backstretch chose the most appropriately sized image for our screen, perhaps using responsive images.) How to find the effects that make a site interesting? I keep a set of bookmarks for interesting JavaScript and CSS effects I might want to use someday, from realistic shadows to animating grids. 
The JavaScript Weekly newsletter is a great source of ideas. 8. Colour schemes We’re just about getting there – though we’re probably past half an hour now – but that button and that menu still both look awfully Bootstrappy. Real sites, with real designers, have a colour palette, carefully chosen to harmonize and match the brand profile. For our purposes, we’re just going to borrow some colours from the image. We use Gimp’s colour picker tool to identify the hex values of the blue of the snow. Then we can use Color Scheme Designer to find contrasting, but complementary, colours. Finally, we use those colours for our central button. There are lots of tools to help us do this, such as Bootstrap Buttons. The new HTML is quite long, so I won’t paste it all here, but you can find it in the CSS file. We also reset the colour of the pills in the navigation menu, which is a bit easier: .nav-pills > .active > a, .nav-pills > .active > a:hover { background-color: #FF9473; } I’m not sure if the result is great to be honest, but at least we’ve lost those Bootstrap-blue buttons: Another way to do it, if you didn’t have an image to match, would be to borrow an attractive colour scheme. Colourlovers is a community where people create and share ready-made colour palettes. The key thing is to find a palette with an open licence, so you can legitimately use it. Unfortunately, you can’t search for palettes by licence type, but many do have open licences. Here’s a popular palette with a CC-BY-SA licence that allows reuse with attribution. As above, you can use the hex values from the palette in your custom CSS, and bask in the newly colourful results. 9. Read on With the above techniques, you can make a site that is starting to look slightly more professional, pretty quickly. If you have the time to invest, it’s well worth learning some design principles, if only so that design seems less intimidating and more like fun. As part of my design learning, I read a few introductory design books aimed at coders. The best I found was David Kadavy’s Design for Hackers: Reverse-Engineering Beauty, which explains the basic principles behind choosing colours, fonts, typefaces and layout. In the introduction to his book, David writes: No group stands to gain more from design literacy than hackers do… The one subject that is exceedingly frustrating for hackers to try to learn is design. Hackers know that in order to compete against corporate behemoths with just a few lines of code, they need to have good, clear design, but the resources with which to learn design are simply hard to find. Well said. If you have half a day to invest, rather than half an hour, I recommend getting hold of David’s book. And the journey is over. Perhaps that took slightly more than half an hour, but with practice, using the above techniques can become second nature. What useful tools have I missed? Designers, how would you improve on the above? I would love to know, so please give me your views in the comments. 2012 Anna Powell-Smith annapowellsmith 2012-12-16T00:00:00+00:00 https://24ways.org/2012/how-to-make-your-site-look-half-decent/ design
69 How to Do a UX Review A UX review is where an expert goes through a website looking for usability and experience problems and makes recommendations on how to fix them. I’ve completed a number of UX reviews over my twelve years working as a user experience consultant and I thought I’d share my approach. I’ll be talking about reviewing websites here; you can adapt the approach for web apps, or mobile or desktop apps. Why conduct a review Typically, a client asks for a review to be undertaken by a trusted and, ideally, detached third party who either works for an agency or is a freelancer. Often they may ask a new member of the UX team to complete one, or even set it as a task for a job interview. This indicates the client is looking for an objective view, seen from the outside as a user would see the website. I always suggest conducting some user research rather than a review. Users know their goals and watching them make (what you might think of as) mistakes on the website is invaluable. Conducting research with six users can give you six hours’ worth of review material from six viewpoints. In short, user research can identify more problems and show how common those problems might be. There are three reasons, though, why a review might better suit client needs than user research: Quick results: user research and analysis takes at least three weeks. Limited budget: the £6–10,000 cost to run user research is about twice the cost of a UX review. Users are hard to reach: in the business-to-business world, reaching users is difficult, especially if your users hold senior positions in their organisations. Working with consumers is much easier as there are often more of them. There is some debate about the benefits of user research over UX review. In my experience you learn far more from research, but opinions differ. Be objective The number one mistake many UX reviewers make is reporting back the issues they identify as their opinion. This can cause credibility problems because you have to keep justifying why your opinion is correct. I’ve had the most success when giving bad news in a UX review and then finally getting things fixed when I have been as objective as possible, offering evidence for why something may be a problem. To be objective we need two sources of data: numbers from analytics to appeal to reason; and stories from users in the form of personas to speak to emotions. Highlighting issues with dispassionate numerical data helps show the extent of the problem. Making the problems more human using personas can make the problem feel more real. Numbers from analytics The majority of clients I work with use Google Analytics, but if you use a different analytics package the same concepts apply. I use analytics to find two sets of things. 1. Landing pages and search terms Landing pages are the pages users see first when they visit a website – more often than not via a Google search. Landing pages reveal user goals. If a user landed on a page called ‘Yellow shoes’ their goal may well be to find out about or buy some yellow shoes. It would be great to see all the search terms bringing people to the website but in 2011 Google stopped providing search term data to (rightly!) protect users’ privacy. You can get some search term data from Google Webmaster tools, but we must rely on landing pages as a clue to our users’ goals. The thing to look for is high-traffic landing pages with a high bounce rate. 
Bounce rate is the percentage of visitors to a website who navigate away from the site after viewing only one page. A high bounce rate (over 50%) isn’t good; above 70% is bad. To get a list of high-traffic landing pages with a high bounce rate, install this bespoke report. Google Analytics showing landing pages ordered by popularity and the bounce rate for each. This is the list of pages with high demand and that have real problems as the bounce rate is high. This is the main focus of the UX review. 2. User flows We have the beginnings of the user journey: search terms and initial landing pages. Now we can tap into the really useful bit of Google Analytics. Called behaviour flows, they show the most common order of pages visited. Behaviour flows from Google Analytics, showing the routes users took through the website. Here we can see the second and third (and so on) pages users visited. Importantly, we can also see the drop-outs at each step. If your client has it set up, you can also set goal pages (for example, a post-checkout contact us and thank you page). You can then see a similar view that tracks back from the goal pages. If your client doesn’t have this, suggest they set up goal tracking. It’s easy to do. We now have the remainder of the user journey. A user journey Expect the work in analytics to take up to a day. We may well identify more than one user journey, starting from different landing pages and going to different second- and third-level pages. That’s a good thing and shows we have different user types. Talking of user types, we need to define who our users are. Personas We have some user journeys and now we need to understand more about our users’ motivations and goals. I have a love-hate relationship with personas, but used properly these portraits of users can help bring a human touch to our UX review. I suggest using a very cut-down view of a persona. My old friends Steve Cable and Richard Caddick at cxpartners have a great free template for personas from their book Communicating the User Experience. The first thing to do is find a picture that represents that persona. Don’t use crappy stock photography (it’s sometimes hard to relate to perfect-looking people) – use authentic-looking people. Here’s a good collection of persona photos. An example persona. The personas have three basic attributes: Goals: we can complete these drawing on the analytics data we have (see example). Musts: things we have to do to meet the persona’s needs. Must nots: a list of things we really shouldn’t do. Completing points 2 and 3 can often be done during the writing of the report. Let’s take an example. We know that the search term ‘yellow shoes’ takes the user to the landing page for yellow shoes. We also know this page has a high bounce rate, meaning it doesn’t provide a good experience. With our expert hat on, we can review the page. We will find two types of problem: Usability issues: ineffective button placement or incorrect wording, links not looking like links, and so on. Experience issues: for example, if a product is out of stock we have to contact the business to ask them to restock. That link is very small and hard to see. We could identify that the contact button isn’t easy to find (a usability issue) but that’s not the real problem here. That the user has to ask the business to restock the item is a bad user experience. We add this to our personas’ must nots. The big experience problems with the site form the musts and must nots for our personas.
We now have a story around our user journey that highlights what is going wrong. If we’ve identified a number of user journeys, multiple landing pages and differing second and third pages visited, we can create more personas to match. A good rule of thumb is no more than three personas. Any more and they lose impact, watering down your results. Expect persona creation to take up to a day to complete. Let’s start the review We take the user journeys and we follow them step by step, working through the website looking for the reasons why users drop out at each step. Using Keynote or PowerPoint, I structure the final report around the user journey with separate sections for each step. For each step we’ll find both usability and experience problems. Split the results into those two groups. Usability problems are fairly easy to fix as they’re often quick design changes. As you go along, note the usability problems in one place: we’ll call this ‘quick wins’. Simple quick fixes are a reassuring thing for a client to see and mean they can get started on stuff right away. You can mark the severity of usability issues. Use a scale from 1 to 3 (if you use 1 to 5 everything ends up being a 3!) where 1 is minor and 3 is serious. Review the website on the device you’d expect your persona to use. Are they using the site on a smartphone? Review it on a smartphone. I allow one page or slide per problem, which allows me to explain what is going wrong. For a usability problem I’ll often make a quick wireframe or sketch to explain how to address it. A UX review slide displaying all the elements to be addressed. These slides may be viewed from across the room on a screen so zoom into areas of discussion. (Quick tip: if you use Google Chrome, try Awesome Screenshot to capture screens.) When it comes to the more severe experience problems – things like an online shop not offering next day delivery, or a business that needs to promise to get back to new customers within a few hours – these will take more than a tweak to the UI to fix. Call these something different. I use the terms like business challenges and customer experience issues as they show that it will take changes to the organisation and its processes to address them. It’s often beyond the remit of a humble UX consultant to recommend how to fix organisational issues, so don’t try. Again, create a page within your document to collect all of the business challenges together. Expect the review to take between one and three days to complete. The final report should follow this structure: The approach Overview of usability quick wins Overview of experience issues Overview of Google Analytics findings The user journeys The personas Detailed page-by-page review (broken down by steps on the user journey) There are two academic theories to help with the review. Heuristic evaluation is a set of criteria to organise the issues you find. They’re great for categorising the usability issues you identify but in practice they can be quite cumbersome to apply. I prefer the more scientific and much simpler cognitive walkthrough that is focused on goals and actions. A workshop to go through the findings The most important part of the UX review process is to talk through the issues with your client and their team. A document can only communicate a certain amount. Conversations about the findings will help the team understand the severity of the issues you’ve uncovered and give them a chance to discuss what to do about them. Expect the workshop to last around three hours. 
When presenting the report, explain the method you used to conduct the review, the data sources, personas and the reasoning behind the issues you found. Start by going through the usability issues. Often these won’t be contentious and you can build trust and improve your credibility by making simple, easy to implement changes. The most valuable part of the workshop is conversation around each issue, especially the experience problems. The workshop should include time to talk through each experience issue and how the team will address it. I collect actions on index cards throughout the workshop and make a note of who will take what action with each problem. Index cards showing the problem and who is responsible. When talking through the issues, the person who designed the site is probably in the room – they may well feel threatened. So be nice. When I talk through the report I try to have strong ideas, weakly held. At the end of the workshop you’ll have talked through each of the issues and identified who is responsible for addressing them. To close the workshop I hand out the cards to the relevant people, giving them a physical reminder of the next steps they have to take. That’s my process for conducting a review. I’d love to hear any tips you have in the comments. 2015 Joe Leech joeleech 2015-12-03T00:00:00+00:00 https://24ways.org/2015/how-to-do-a-ux-review/ ux
114 How To Create Rockband'ism There are mysteries happening in the world of business these days. We want something else by now. The business of business has to become more than business. We want to be able to identify ourselves with the brands we purchase and we want them to do good things. We want to feel cool because we buy stuff, and we don’t just want a shopping experience – we want an engagement with a company we can relate to. Let me get back to “feeling cool” – if we want to feel cool, we might get the companies we buy from to support that. That’s why I am on a mission to make companies into rockbands. Now when I say rockbands – I don’t mean the puke-y, drunky, nasty stuff that some people would highlight is also a part of rockbands. Therefore I have created my own word “rockband’ism”. This word is the definition of a childhood dream version of being in a rockband – the feeling of being more respected and loved and cool than a cockroach or a suit on the floor of a company. Rockband’ism Rockband’ism is what we aspire to, to feel cool and happy. So basically what I am arguing is that companies should look upon themselves as rockbands. Because the world has changed, business needs to change as well. I have listed a couple of things you could do today to become a rockband, as a person or as a company. 1 – Give your support to companies that make a difference to their surroundings – if you are buying electronics, look up what good the electronics producers are doing in the world (check out the Greenpeace Guide to Greener Electronics). 2 – Implement good karma in your everyday life (and do well by doing good). What you give out you get back at some point in some shape – this can also be implemented for business. 3 – WWRD? – “What would a rockband do?” Or if you are into Kenny Rogers – what would he do in any given situation? This will also show you where your business or personal integrity lies, because you actually act as a person or a rockband you admire. 4 – Start leading instead of managing – if we can measure stuff, why should we manage it? Leadership is key here instead of management. When you lead you tell people how to reach the stars; when you manage you keep them on the ground. 5 – Respect, and trust, that people are the best at what they do. If they aren’t, they won’t be around for long. If they are and you keep on buggin’ them, they won’t be around for long either. 6 – Don’t be arrogant – because audiences can’t stand it – talk to people as a person, not as a company. 7 – Focus on your return on involvement – know that you get a return on what you involve yourself in. No matter if it’s bingo, communities, talks, ornithology or un-conferences. 8 – Find out where you can make a difference and do it. Don’t leave it up to everybody else to save the world. 9 – Find out what you can do to become an authentic, trustworthy and remarkable company. Maybe you could even think about this a lot and make these thoughts into an action plan. 10 – Last but not least – if you’re not happy – do something else, become another type of rockband, maybe a soloist of a sort, or an orchestra. No more business as usual This really isn’t the time for more business as usual; our environment (digital, natural, work or any other kind of environment) is changing. You are going to have to change too. This article actually sprang from a talk I did at the Shift08 conference in Lisbon in October.
In addition to this article for 24 ways I have turned the talk into an eBook that you can get on Toothless Tiger Press for free. May you all have a sustainable and great Christmas full of great moments with your loved ones. December is a month for gratitude, enjoyment and love. 2008 Henriette Weber henrietteweber 2008-12-07T00:00:00+00:00 https://24ways.org/2008/how-to-create-rockbandism/ business
55 How Tabs Should Work Tabs in browsers (not browser tabs) are one of the oldest custom UI elements in a browser that I can think of. They’ve been done to death. But, sadly, most of the time I come across them, the tabs have been badly, or rather partially, implemented. So this post is my definition of how a tabbing system should work, and one approach of implementing that. But… tabs are easy, right? I’ve been writing code for tabbing systems in JavaScript for coming up on a decade, and at one point I was pretty proud of how small I could make the JavaScript for the tabbing system: var tabs = $('.tab').click(function () { tabs.hide().filter(this.hash).show(); }).map(function () { return $(this.hash)[0]; }); $('.tab:first').click(); Simple, right? Nearly fits in a tweet (ignoring the whole jQuery library…). Still, it’s riddled with problems that make it a far from perfect solution. Requirements: what makes the perfect tab? All content is navigable and available without JavaScript (crawler-compatible and low JS-compatible). ARIA roles. The tabs are anchor links that: are clickable have block layout have their href pointing to the id of the panel element use the correct cursor (i.e. cursor: pointer). Since tabs are clickable, the user can open in a new tab/window and the page correctly loads with the correct tab open. Right-clicking (and Shift-clicking) doesn’t cause the tab to be selected. Native browser Back/Forward button correctly changes the state of the selected tab (think about it working exactly as if there were no JavaScript in place). The first three points are all to do with the semantics of the markup and how the markup has been styled. I think it’s easy to do a good job by thinking of tabs as links, and not as some part of an application. Links are navigable, and they should work the same way other links on the page work. The last three points are JavaScript problems. Let’s investigate that. The shitmus test Like a litmus test, here’s a couple of quick ways you can tell if a tabbing system is poorly implemented: Change tab, then use the Back button (or keyboard shortcut) and it breaks The tab isn’t a link, so you can’t open it in a new tab These two basic things are, to me, the bare minimum that a tabbing system should have. Why is this important? The people who push their so-called native apps on users can’t have more reasons why the web sucks. If something as basic as a tab doesn’t work, obviously there’s more ammo to push a closed native app or platform on your users. If you’re going to be a web developer, one of your responsibilities is to maintain established interactivity paradigms. This doesn’t mean don’t innovate. But it does mean: stop fucking up my scrolling experience with your poorly executed scroll effects. </rant> :breath: URI fragment, absolute URL or query string? A URI fragment (AKA the # hash bit) would be using mysite.com/config#content to show the content panel. A fully addressable URL would be mysite.com/config/content. Using a query string (by way of filtering the page): mysite.com/config?tab=content. This decision really depends on the context of your tabbing system. For something like GitHub’s tabs to view a pull request, it makes sense that the full URL changes. For our problem though, I want to solve the issue when the page doesn’t do a full URL update; that is, your regular run-of-the-mill tabbing system. I used to be from the school of using the hash to show the correct tab, but I’ve recently been exploring whether the query string can be used. 
The biggest reason is that multiple hashes don’t work, and comma-separated hash fragments don’t make any sense to control multiple tabs (since it doesn’t actually link to anything). For this article, I’ll keep focused on using a single tabbing system and a hash on the URL to control the tabs. Markup I’m going to assume subcontent, so my markup would look like this (yes, this is a cat demo…): <ul class="tabs"> <li><a class="tab" href="#dizzy">Dizzy</a></li> <li><a class="tab" href="#ninja">Ninja</a></li> <li><a class="tab" href="#missy">Missy</a></li> </ul> <div id="dizzy"> <!-- panel content --> </div> <div id="ninja"> <!-- panel content --> </div> <div id="missy"> <!-- panel content --> </div> It’s important to note that in the markup the link used for an individual tab references its panel content using the hash, pointing to the id on the panel. This will allow our content to connect up without JavaScript and give us a bunch of features for free, which we’ll see once we’re on to writing the code. URL-driven tabbing systems Instead of making the code responsive to the user’s input, we’re going to exclusively use the browser URL and the hashchange event on the window to drive this tabbing system. This way we get Back button support for free. With that in mind, let’s start building up our code. I’ll assume we have the jQuery library, but I’ve also provided the full code working without a library (vanilla, if you will), but it depends on relatively new (polyfillable) tech like classList and dataset (which generally have IE10 and all other browser support). Note that I’ll start with the simplest solution, and I’ll refactor the code as I go along, like in places where I keep calling jQuery selectors. function show(id) { // remove the selected class from the tabs, // and add it back to the one the user selected $('.tab').removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it $('.panel').hide().filter(id).show(); } $(window).on('hashchange', function () { show(location.hash); }); // initialise by showing the first panel show('#dizzy'); This works pretty well for such little code. Notice that we don’t have any click handlers for the user and the Back button works right out of the box. However, there’s a number of problems we need to fix: The initialised tab is hard-coded to the first panel, rather than what’s on the URL. If there’s no hash on the URL, all the panels are hidden (and thus broken). If you scroll to the bottom of the example, you’ll find a “top” link; clicking that will break our tabbing system. I’ve purposely made the page long, so that when you click on a tab, you’ll see the page scrolls to the top of the tab. Not a huge deal, but a bit annoying. From our criteria at the start of this post, we’ve already solved items 4 and 5. Not a terrible start. Let’s solve items 1 through 3 next. Using the URL to initialise correctly and protect from breakage Instead of arbitrarily picking the first panel from our collection, the code should read the current location.hash and use that if it’s available. The problem is: what if the hash on the URL isn’t actually for a tab? The solution here is that we need to cache a list of known panel IDs. In fact, well-written DOM scripting won’t continuously search the DOM for nodes. That is, when the show function kept calling $('.tab').each(...) it was wasteful. The result of $('.tab') should be cached. 
So now the code will collect all the tabs, then find the related panels from those tabs, and we’ll use that list to double-check the values we give the show function (during initialisation, for instance). // collect all the tabs var tabs = $('.tab'); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')); function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', function () { var hash = location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } }); // initialise show(targets.indexOf(location.hash) !== -1 ? location.hash : ''); The core of working out which tab to initialise with is solved in that last line: is there a location.hash? Is it in our list of valid targets (panels)? If so, select that tab. The second breakage we saw in the original demo was that clicking the “top” link would break our tabs. This was due to the hashchange event firing and the code not validating the hash that was passed. Now that this validation happens, the panels don’t break. So far we’ve got a tabbing system that: Works without JavaScript. Supports right-click and Shift-click (and doesn’t select in these cases). Loads the correct panel if you start with a hash. Supports native browser navigation. Supports the keyboard. The only annoying problem we have now is that the page jumps when a tab is selected. That’s due to the browser following the default behaviour of an internal link on the page. To solve this, things are going to get a little hairy, but it’s all for a good cause. Removing the jump to tab You’d be forgiven for thinking you just need to hook a click handler and return false. It’s what I started with. Only that’s not the solution. If we add the click handler, it breaks all the right-click and Shift-click support. There may be another way to solve this, but what follows is the way I found – and it works. It’s just a bit… hairy, as I said. We’re going to strip the id attribute off the target panel when the user tries to navigate to it, and then put it back on once the show code starts to run. This change will mean the browser has nowhere to navigate to for that moment, and won’t jump the page. The change involves the following: 1. Add a click handler that removes the id from the target panel, and cache this in a target variable that we’ll use later in hashchange (see point 4). 2. In the same click handler, set the location.hash to the current link’s hash. This is important because it drives the update through the URL – and when the URL isn’t actually changing (so no hashchange event will fire), the handler triggers the update manually, which prevents the tabs breaking (try it yourself by removing this line). 3. For each panel, put a backup copy of the id attribute in a data property (I’ve called it old-id). 4. When the hashchange event fires, if we have a target value, let’s put the id back on the panel.
These changes result in this final code: /*global $*/ // a temp value to cache *what* we're about to show var target = null; // collect all the tabs var tabs = $('.tab').on('click', function () { target = $(this.hash).removeAttr('id'); // if the URL isn't going to change, then hashchange // event doesn't fire, so we trigger the update manually if (location.hash === this.hash) { // but this has to happen after the DOM update has // completed, so we wrap it in a setTimeout 0 setTimeout(update, 0); } }); // get an array of the panel ids (from the anchor hash) var targets = tabs.map(function () { return this.hash; }).get(); // use those ids to get a jQuery collection of panels var panels = $(targets.join(',')).each(function () { // keep a copy of what the original el.id was $(this).data('old-id', this.id); }); function update() { if (target) { target.attr('id', target.data('old-id')); target = null; } var hash = window.location.hash; if (targets.indexOf(hash) !== -1) { show(hash); } } function show(id) { // if no value was given, let's take the first panel if (!id) { id = targets[0]; } // remove the selected class from the tabs, // and add it back to the one the user selected tabs.removeClass('selected').filter(function () { return (this.hash === id); }).addClass('selected'); // now hide all the panels, then filter to // the one we're interested in, and show it panels.hide().filter(id).show(); } $(window).on('hashchange', update); // initialise if (targets.indexOf(window.location.hash) !== -1) { update(); } else { show(); } This version now meets all the criteria I mentioned in my original list, except for the ARIA roles and accessibility. Getting this support is actually very cheap to add. ARIA roles This article on ARIA tabs made it very easy to get the tabbing system working as I wanted. The tasks were simple: 1. Add role set to tab for the tabs, and tabpanel for the panels. 2. Set aria-controls on the tabs to point to their related panel (by id). 3. I use JavaScript to add tabindex=0 to all the tab elements. 4. When I add the selected class to the tab, I also set aria-selected to true and, inversely, when I remove the selected class I set aria-selected to false. 5. When I hide the panels I add aria-hidden=true, and when I show the specific panel I set aria-hidden=false. And that’s it. Very small changes to get full sign-off that the tabbing system is bulletproof and accessible (a rough sketch of these additions follows at the end of this article). Check out the final version (and the non-jQuery version as promised). In conclusion There are a lot of tab implementations out there, but an equal number break the browsing paradigm and the simple linkability of content. Clearly there’s a special hell for those tab systems that don’t even use links, but I think it’s clear that even in something that’s relatively simple, it’s the small details that make or break the user experience. Obviously there are corners I’ve not explored, like when there’s more than one set of tabs on a page, and equally whether you should deliver the initial markup with the correct tab selected. I think the answer lies in using query strings in combination with hashes on the URL, but maybe that’s for another year! 2015 Remy Sharp remysharp 2015-12-22T00:00:00+00:00 https://24ways.org/2015/how-tabs-should-work/ code
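As a rough sketch of the ARIA additions described above (an illustration layered on the tabs and panels collections from the final code; the article’s finished version may differ in the details):

// one-time setup: roles, keyboard access, and tab-to-panel wiring
tabs.attr('role', 'tab').attr('tabindex', 0).each(function () {
  // aria-controls takes the panel id without the leading '#'
  $(this).attr('aria-controls', this.hash.slice(1));
});
panels.attr('role', 'tabpanel');

// inside show(): mirror the visual state into ARIA state
tabs.attr('aria-selected', 'false').filter(function () {
  return (this.hash === id);
}).attr('aria-selected', 'true');
panels.attr('aria-hidden', 'true').hide().filter(id).attr('aria-hidden', 'false').show();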
159 How Media Studies Can Massage Your Message A young web designer once told his teacher ‘just get to the meat already.’ He was frustrated with what he called the ‘history lessons and name-dropping’ aspects of his formal college course. He just wanted to learn how to build Web sites, not examine the reasons why. Technique and theory are both integrated and necessary portions of a strong education. The student’s perspective has direct value, but also holds a distinct sorrow: Knowing the how without the why creates a serious problem. Without these surrounding insights we cannot tap into the influence of the history and evolved knowledge that came before. We cannot properly analyze, criticize, evaluate and innovate beyond the scope of technique. History holds the key to transcending former mistakes. Philosophy encourages us to look at different points of view. Studying media and social history empowers us as Web workers by bringing together myriad aspects of humanity in direct relation to the environment of society and technology. An understanding of media, communities and communication arts, as well as logic, language and computer savvy – these are all core skills of the best web designers in today’s medium. Controlling the Message ‘The computer can’t tell you the emotional story. It can give you the exact mathematical design, but what’s missing is the eyebrows.’ – Frank Zappa Media is meant to express an idea. The great media theorist Marshall McLuhan suggests that not only is media interesting because it’s about the expression of ideas, but that the media itself actually shapes the way a given idea is perceived. This is what McLuhan meant when he uttered those famous words: ‘The medium is the message.’ If, instead of actually serving a steak to a vegetarian friend, we offered a painting of the steak, what might that painting mean? Or a sculpture of a cow? Depending upon the form of media in question, the message is altered. [Figure 1] Must we know the history of cows to appreciate the steak on our plate? Perhaps not, but if we begin to examine how that meat came to be on the plate, the social, cultural and ideological associations of that cow, we begin to understand the complexity of both medium and message. A piece of steak on my plate makes me happy. A vegetarian friend from India might disagree and even find that serving her a steak was very insensitive. Takeaway: Getting the message right involves understanding that message in order to direct it to your audience accordingly. A Separate Piece If we revisit the student who only wants technique, while he might become extremely adept at the rendering of an idea, without an understanding of the medium, how is he to have greater control over how that idea is perceived? Ultimately, his creativity is limited and his perspective narrowed, and the teacher has done her student a disservice without challenging him, particularly in a scholastic environment, to think in liberal, creative and ultimately innovative terms. For many years, web pundits including myself have promoted the idea of separation as a core concept within creating effective front-end media for the Web. By this, we’ve meant literal separation of the technologies and documents: Markup for content; CSS for presentation; DOM Scripting for behavior. While the message of separation was an important part of understanding and teaching best practices for manageable, scalable sites, that separation is really just a separation of pieces, not of entire disciplines.
For in fact, the medium of the Web is an integrated one. That means each part of the desired message must be supported by the media silos within a given site. The visual designer must study the color, space, shape and placement of visual objects (including type) into a meaningful expression. The underlying markup is ideally written as semantically as possible, to promote the meaning of the content it describes. Any interaction and functionality must support, not take away from, the meaning of the site or Web application. Examination: The Glass Bead Game Figure 2 shows two screenshots of CoreWave’s historic ‘Glass Bead Game.’ Fashioned after Herman Hesse’s novel of the same name, the game was an exploration of how ideas are connected to other ideas via multiple forms of media. It was created for the Web in 1996 using server-side randomization with .htmlx files in order to allow players to see how random associations are in fact not random at all. Takeaway: We can use the medium itself to explore creative ideas, to link us from one idea to the next, and to help us better express those ideas to our audiences. Computers and Human Interaction Since our medium involves computers and human interaction, it does us well to look to foundations of computers and reason. Not long ago I was chatting with Jared Spool on IM about this and that, and he asked me ‘So how do you feel about that?’ This caused me no end of laughter and I instantly quipped back ‘It’s okay by me ELIZA.’ We both enjoyed the joke, but then I tried to share it with another, dare I say younger, colleague, and the reference was lost. Raise your hand if you got the reference! Some of you will, but many people who come to the Web medium do not get the benefit of such historical references because we are not formally educated in them. Joseph Weizenbaum created the ELIZA program, which emulates a Rogerian therapist, in 1966. It was an early study of computers and natural human language. I was a little over 2 years old; how about you? Conversation with Eliza There are fortunately a number of ELIZA emulators on the Web. I found one at http://www.chayden.net/eliza/Eliza.html that actually contains the source code (in Java) that makes up the ELIZA script. Figure 3 shows a screenshot of the interaction. ELIZA first welcomes me, saying ‘Hello, How do you do. Please state your problem’, and we continue in a short loop of conversation, the computer using cues from my answers to create new questions and leading fragments of conversation. Albeit a very limited demonstration of how humans could interact with a computer in 1966, it’s amusing to play with now and compare it to something as richly interactive as the Microsoft Surface (Figure 4). Here, we see clear Lucite blocks that display projected video. Each side of the block has a different view of the video, so not only does one have to match up the images as they are moving, but do so in the proper directionality. Takeaway: the better we know our environment, the more we can alter it to emulate, expand and even supersede our message. Leveraging Holiday Cheer Since most of us have at least a few days off for the holidays now that Christmas is upon us, it’s the perfect time to reflect on one’s environment and examine the messages within it. Convince your spouse to find you a few audio books for stocking stuffers. Find interactive games to play with your kids and observe them, and yourself, during the interaction.
Pour a nice egg-nog and sit down with a copy of Marshall McLuhan’s ‘The Medium is the Massage.’ Leverage that holiday cheer and here’s to a prosperous, interactive new year. 2007 Molly Holzschlag mollyholzschlag 2007-12-22T00:00:00+00:00 https://24ways.org/2007/how-media-studies-can-massage-your-message/ ux
10 Home Kanban for Domestic Bliss My wife is an architect. I’m a leader of big technical teams these days, but for many years after I was a dev I was a project/program manager. Our friends and family used to watch Grand Designs and think that we would make the ideal team — she could design, I could manage the project of building or converting whatever dream home we wanted. Then we bought a house. A Victorian terrace in the north-east of England that needed, well, a fair bit of work. The big decisions were actually pretty easy: yes, we should knock through a double doorway from the dining room to the lounge; yes, we should strip out everything from the utility room and redo it; yes, we should roll back the hideous carpet in the bedrooms upstairs and see if we could restore the original wood flooring. Those could be managed like a project. What couldn’t be was all the other stuff. Incremental improvements are harder to schedule, and in a house that’s over a hundred years old you never know what you’re going to find when you clear away some tiles, or pull up the carpets, or even just spring-clean the kitchen (“Erm, hon? The paint seems to be coming off. Actually, so does the plaster…”). A bit like going in to fix bugs in code or upgrade a machine — sometimes you end up quite far down the rabbit hole. And so, as we tried to fit in those improvements in our evenings and weekends, we found ourselves disagreeing. Arguing, even. We were both trying to do the right thing (make the house better) but since we were fitting it in where we could, we often didn’t get to talk and agree in detail what was needed (exactly how to make the house better). And it’s really frustrating when you stay up late doing something, just to find that your other half didn’t mean that they meant this instead, and so your effort was wasted. Then I saw this tweet from my friend and colleague Jamie Arnold, who was using the same kanban board approach at home as we had instituted at the UK Government Digital Service to manage our portfolio. Mrs Arnold embraces Kanban wall at home. Disagreements about work in progress and priority significantly reduced.. ;) pic.twitter.com/407brMCH— Jamie Arnold (@itsallgonewrong) October 27, 2012 And despite Jamie’s questionable taste in fancy dress outfits (look closely at that board), he is a proper genius when it comes to processes and particularly agile ones. So I followed his example and instituted a home kanban board. What is this kanban of which you speak? Kanban boards are an artefact from lean manufacturing — basically a visualisation of a production process. They are used to show you where your bottlenecks are, or where one part of the process is producing components faster than another part of the process can cope. Identifying the bottlenecks leads you to set work in progress (WIP) limits, so that you get an overall more efficient system. Increasingly kanban is used as an agile software development approach, too, especially where support work (like fixing bugs) needs to be balanced with incremental enhancement (like adding new features). I’m a big advocate of kanban when you have a system that needs to be maintained and improved by the same team at the same time. Rather than the sprint-based approach of scrum (where the next sprint’s stories or features to be delivered are agreed up front), kanban lets individuals deal with incidents or problems that need investigation and bug fixing when urgent and important. 
Then, when someone has capacity, they can just go to the board and pull down the next feature to develop or test. So, how did we use it? One of the key tenets of kanban is that you visualise your workflow, so we put together a whiteboard with columns: Icebox; To Do Next; In Process; Done; and also a section called Blocked. Then, for each thing that needed to happen in the house, we put it on a Post-it note and initially chucked them all in the Icebox — a collection with no priority assigned yet. Each week we looked at the Icebox and pulled out a set of things that we felt should be done next. This was pulled into the To Do Next column, and then each time either of us had some time, we could just pull a new thing over into the In Process column. We agreed to review at the end of each week and move things to Done together, and to talk about whether this kanban approach was working for us or not. We quickly learned for ourselves why kanban has WIP limits as a key tenet — it’s tempting to pull everything into the To Do Next column, but that’s unrealistic. And trying to do more than one or two things each at a given time isn’t terribly productive owing to the cost of task switching. So we tend to limit our To Do Next to about seven items, and our In Process to about four (a max of two each, basically). We use the Blocked column when something can’t be completed — perhaps we can’t fix something because we discovered we don’t have the required tools or supplies, or if we’re waiting for a call back from a plumber. But it’s nice to put it to one side, knowing that it won’t be forgotten. What helped the most? It wasn’t so much the visualisation that helped us see what we needed to do; it was the conversations that happened as we agreed priorities, moved things to In Process and then on to Done that made the biggest difference. Getting clear on the order of importance really is invaluable — as is getting clear on what Done really means! The Blocked column is also great, as it helps us keep track of things we need to do outside the house to make sure we can make progress. We also found it really helpful to examine the process itself and figure out whether it was working for us. For instance, one thing we realised is that it’s worth tracking some regular tasks that need time invested in them (like taking recycling that isn’t picked up to the recycling centre), as these used to cycle around and around. So they were moved to Done as part of our weekly review, but then immediately put back in the Icebox to float back to the top again at a relevant time. But the best thing of all? That moment where we get to mark something as done! It’s immensely satisfying to review at the end of the week and have a physical marker of the progress you’ve made. All in all, a home kanban board turned out to be a very effective way to pull tasks through stages rather than always trying to plan them out in advance, and definitely made collaboration on our home tasks significantly smoother. Give it a try! 2013 Meri Williams meriwilliams 2013-12-14T00:00:00+00:00 https://24ways.org/2013/home-kanban-for-domestic-bliss/ process
121 Hide And Seek in The Head If you want your JavaScript-enhanced pages to remain accessible and understandable to scripted and noscript users alike, you have to think before you code. Which functionalities are required (ie. should work without JavaScript)? Which ones are merely nice-to-have (ie. can be scripted)? You should only start creating the site when you’ve taken these decisions. Special HTML elements Once you have a clear idea of what will work with and without JavaScript, you’ll likely find that you need a few HTML elements for the noscript version only. Take this example: A form has a nifty bit of Ajax that automatically and silently sends a request once the user enters something in a form field. However, in order to preserve accessibility, the user should also be able to submit the form normally. So the form should have a submit button in noscript browsers, but not when the browser supports sufficient JavaScript. Since the button is meant for noscript browsers, it must be hard-coded in the HTML: <input type="submit" value="Submit form" id="noScriptButton" /> When JavaScript is supported, it should be removed: var checkJS = [check JavaScript support]; window.onload = function () { if (!checkJS) return; document.getElementById('noScriptButton').style.display = 'none'; } Problem: the load event Although this will likely work fine in your testing environment, it’s not completely correct. What if a user with a modern, JavaScript-capable browser visits your page, but has to wait for a huge graphic to load? The load event fires only after all assets, including images, have been loaded. So this user will first see a submit button, but then all of a sudden it’s removed. That’s potentially confusing. Fortunately there’s a simple solution: play a bit of hide and seek in the <head>: var checkJS = [check JavaScript support]; if (checkJS) { document.write('<style>#noScriptButton{display: none}</style>'); } First, check if the browser supports enough JavaScript. If it does, document.write an extra <style> element that hides the button. The difference with the previous technique is that the document.write command is outside any function, and is therefore executed while the JavaScript is being parsed. Thus, the #noScriptButton{display: none} rule is written into the document before the actual HTML is received. That’s exactly what we want. If the rule is already present at the moment the HTML for the submit button is received and parsed, the button is hidden immediately. Even if the user (and the load event) have to wait for a huge image, the button is already hidden, and both scripted and noscript users see the interface they need, without any potentially confusing flashes of useless content. In general, if you want to hide content that’s not relevant to scripted users, give the hide command in CSS, and make sure it’s given before the HTML element is loaded and parsed. Alternative Some people won’t like to use document.write. They could also add an empty <link /> element to the <head> and give it an href attribute once the browser’s JavaScript capabilities have been evaluated. The <link /> element is made to refer to a style sheet that contains the crucial #noScriptButton{display: none}, and everything works fine. Important note: The script needs access to the <link />, and the only way to ensure that access is to include the empty <link /> element before your <script> tag. 2006 Peter-Paul Koch ppk 2006-12-06T00:00:00+00:00 https://24ways.org/2006/hide-and-seek-in-the-head/ code
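To make that alternative concrete, here’s a minimal sketch of it (my own illustration of the approach described above; the element id and stylesheet name are hypothetical, and the [check JavaScript support] placeholder is kept from the article):

<link id="jsStyles" rel="stylesheet" href="" />
<script type="text/javascript">
  var checkJS = [check JavaScript support];
  if (checkJS) {
    // point the empty link at a style sheet containing the
    // crucial #noScriptButton{display: none} rule
    document.getElementById('jsStyles').href = 'hidejs.css';
  }
</script>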
56 Helping VIPs Care About Performance Making a site feel super fast is the easy part of performance work. Getting people around you to care about site speed is a much bigger challenge. How do we keep the site fast beyond the initial performance work? Keeping very important people like your upper management or clients invested in performance work is critical to keeping a site fast and empowering other designers and developers to contribute. The work to get others to care is so meaty that I dedicated a whole chapter to the topic in my book Designing for Performance. When I speak at conferences, the majority of questions during Q&A are on this topic. When I speak to developers and designers who care about performance, getting other people at one’s organization or agency to care becomes the most pressing question. My primary response to folks who raise this issue is the question: “What metric(s) do your VIPs care about?” This is often met with blank stares and raised eyebrows. But it’s also our biggest clue to what we need to do to help empower others to care about performance and work on it. Every organization and executive is different. This means that three major things vary: the primary metrics VIPs care about; the language they use about measuring success; and how change is enacted. By clueing in to these nuances within your organization, you can get a huge leg up on crafting a successful pitch about performance work. Let’s start with the metric that we should measure. Sure, (most) everybody cares about money – but is that really the metric that your VIPs are looking at each day to measure the success or efficacy of your site? More likely, dollars are the end game, but the metrics or key performance indicators (KPIs) people focus on might be: rate of new accounts created/signups; cost of acquiring or retaining a customer; visitor return rate; visitor bounce rate; favoriting or another interaction rate. These are just a few examples, but they illustrate how wide-ranging the options are that people care about. I find that developers and designers haven’t necessarily investigated this when trying to get others to care about performance. We often reach for the obvious – money! – but if we don’t use the same kind of language our VIPs are using, we might not get too far. You need to know this before you can make the case for performance work. To find out these metrics or KPIs, start reading through the emails your VIPs are sending within your company. What does it say on company wikis? Are there major dashboards internally that people are looking at where you could find some good metrics? Listen intently in team meetings or thoroughly read annual reports to see what these metrics could be. The second key here is to pick up on language you can effectively copy and paste as you make the case for performance work. You need to be able to reflect back the metrics that people already find important in a way they’ll be able to hear. Once you know your key metrics, it’s time to figure out how to communicate with your VIPs about performance using language that will resonate with them. Let’s start with visit traffic as an example metric that a very important person cares about. Start to dig up research that other people and companies have done that correlates performance and your KPI. For example, cite studies: “When the home page of Google Maps was reduced from 100KB to 70–80KB, traffic went up 10% in the first week, and an additional 25% in the following three weeks.” (source).
Read through websites like WPOStats, which collects the spectrum of studies on the impact of performance optimization on user experience and business metrics. Tweet and see if others have done similar research that correlates performance and your site’s main KPI. Once you have collected some research that touches on the same kind of language your VIPs use about the success of your site, it’s time to present it. You can start with something simple, like a qualitative description of the work you’re actively doing to improve the site that translates to improved metrics that your VIPs care about. It can be helpful to append a performance budget to any proposal so you can compare the budget to your site’s reality and how it might positively impact those KPIs folks care about. Words and graphs are often only half the battle when it comes to getting others to care about performance. Often, videos appeal to folks’ emotions in a way that is missed when glancing through charts and graphs. On A List Apart I recently detailed how to create videos of how fast your site loads. Let’s say that your VIPs care about how your site loads on mobile devices; it’s time to show them how your site loads on mobile networks. You can use these videos to make a number of different statements to your VIPs, depending on what they care about: Look at how slow our site loads versus our competitor! Look at how slow our site loads for users in another country! Look at how slow our site loads on mobile networks! Again, you really need to know which metrics your VIPs care about and tune into the language they’re using. If they don’t care about the overall user experience of your site on mobile devices, then showing them how slow your site loads on 3G isn’t going to work. This will be your sales pitch; you need to practice and iterate on the language and highlights that will land best with your audience. To make your sales pitch as solid as possible, gut-check your ideas on how to present it with other co-workers to get their feedback. Read up on how to construct effective arguments and deliver them; do some research and see what others have done at your company when pitching to VIPs. Are slides effective? Memos or emails? Hallway conversations? Sometimes the best way to change people’s minds is by mentioning it in informal chats over coffee. Emulate the other leaders in your organization who are successful at this work. Every organization and very important person is different. Learn what metrics folks truly care about, study the language that they use, and apply what you’ve learned in a way that’ll land with those individuals. It may take time to craft your pitch for performance work, but it’s important work to do. If you’re able to figure out how to mirror back the language and metrics VIPs care about, and connect the dots to performance for them, you will have a huge leg up on keeping your site fast in the long run. 2015 Lara Hogan larahogan 2015-12-08T00:00:00+00:00 https://24ways.org/2015/helping-vips-care-about-performance/ business
179 Have a Field Day with HTML5 Forms Forms are usually seen as that obnoxious thing we have to mark up and style. I respectfully disagree: forms (on a par with tables) are the most exciting thing we have to work with. Here we’re going to take a look at how to style a beautiful HTML5 form using some advanced CSS and the latest CSS3 techniques. I promise you will want to style your own forms after you’ve read this article. Here’s what we’ll be creating: The form. (Icons from Chalkwork Payments) Meaningful markup We’re going to style a simple payment form. There are three main sections on this form: the person’s details; the address details; the credit card details. We are also going to use some of HTML5’s new input types and attributes to create more meaningful fields and use fewer unnecessary classes and ids: email, for the email field; tel, for the telephone field; number, for the credit card number and security code; required, for required fields; placeholder, for the hints within some of the fields; autofocus, to put focus on the first input field when the page loads. There are a million more new input types and form attributes in HTML5, and you should definitely take a look at what’s new on the W3C website. Hopefully this will give you a good idea of how much more fun form markup can be. A good foundation Each section of the form will be contained within its own fieldset. In the case of the radio buttons for choosing the card type, we will enclose those options in another nested fieldset. We will also be using an ordered list to group each label / input pair. This will provide us with a (kind of) semantic styling hook and it will also make the form easier to read when viewing with no CSS applied: The unstyled form So here’s the markup we are going to be working with: <form id=payment> <fieldset> <legend>Your details</legend> <ol> <li> <label for=name>Name</label> <input id=name name=name type=text placeholder="First and last name" required autofocus> </li> <li> <label for=email>Email</label> <input id=email name=email type=email placeholder="example@domain.com" required> </li> <li> <label for=phone>Phone</label> <input id=phone name=phone type=tel placeholder="Eg.
+447500000000" required> </li> </ol> </fieldset> <fieldset> <legend>Delivery address</legend> <ol> <li> <label for=address>Address</label> <textarea id=address name=address rows=5 required></textarea> </li> <li> <label for=postcode>Post code</label> <input id=postcode name=postcode type=text required> </li> <li> <label for=country>Country</label> <input id=country name=country type=text required> </li> </ol> </fieldset> <fieldset> <legend>Card details</legend> <ol> <li> <fieldset> <legend>Card type</legend> <ol> <li> <input id=visa name=cardtype type=radio> <label for=visa>VISA</label> </li> <li> <input id=amex name=cardtype type=radio> <label for=amex>AmEx</label> </li> <li> <input id=mastercard name=cardtype type=radio> <label for=mastercard>Mastercard</label> </li> </ol> </fieldset> </li> <li> <label for=cardnumber>Card number</label> <input id=cardnumber name=cardnumber type=number required> </li> <li> <label for=secure>Security code</label> <input id=secure name=secure type=number required> </li> <li> <label for=namecard>Name on card</label> <input id=namecard name=namecard type=text placeholder="Exact name as on the card" required> </li> </ol> </fieldset> <fieldset> <button type=submit>Buy it!</button> </fieldset> </form> Making things look nice First things first, so let’s start by adding some defaults to our form by resetting the margins and paddings of the elements and adding a default font to the page: html, body, h1, form, fieldset, legend, ol, li { margin: 0; padding: 0; } body { background: #ffffff; color: #111111; font-family: Georgia, "Times New Roman", Times, serif; padding: 20px; } Next we are going to style the form element that is wrapping our fields: form#payment { background: #9cbc2c; -moz-border-radius: 5px; -webkit-border-radius: 5px; border-radius: 5px; padding: 20px; width: 400px; } We will also remove the border from the fieldset and apply some bottom margin to it. Using the :last-of-type pseudo-class, we remove the bottom margin of the last fieldset — there is no need for it: form#payment fieldset { border: none; margin-bottom: 10px; } form#payment fieldset:last-of-type { margin-bottom: 0; } Next we’ll make the legends big and bold, and we will also apply a light-green text-shadow, to add that little extra special detail: form#payment legend { color: #384313; font-size: 16px; font-weight: bold; padding-bottom: 10px; text-shadow: 0 1px 1px #c0d576; } Our legends are looking great, but how about adding a clear indication of how many steps our form has? Instead of adding that manually to every legend, we can use automatically generated counters. To add a counter to an element, we have to use either the :before or :after pseudo-elements to add content via CSS. We will follow these steps: create a counter using the counter-reset property on the form element; call the counter with the content property (using the same name we’ve created before); and, with the counter-increment property, indicate that for each element that matches our selector, that counter will be increased by 1. form#payment > fieldset > legend:before { content: "Step " counter(fieldsets) ": "; counter-increment: fieldsets; } Finally, we need to change the style of the legend that is part of the radio buttons group, to make it look like a label: form#payment fieldset fieldset legend { color: #111111; font-size: 13px; font-weight: normal; padding-bottom: 0; } Styling the lists For our list elements, we’ll just add some nice rounded corners and a semi-transparent border and background.
Because we are using RGBa colors, we should provide a fallback for browsers that don’t support them (that comes before the RGBa color). For the nested lists, we will remove these properties because they would be overlapping: form#payment ol li { background: #b9cf6a; background: rgba(255,255,255,.3); border-color: #e3ebc3; border-color: rgba(255,255,255,.6); border-style: solid; border-width: 2px; -moz-border-radius: 5px; -webkit-border-radius: 5px; border-radius: 5px; line-height: 30px; list-style: none; padding: 5px 10px; margin-bottom: 2px; } form#payment ol ol li { background: none; border: none; float: left; } Form controls Now we only need to style our labels, inputs and the button element. All our labels will look the same, with the exception of the one for the radio elements. We will float them to the left and give them a width. For the credit card type labels, we will add an icon as the background, and override some of the properties that aren’t necessary. We will be using the attribute selector to specify the background image for each label — in this case, we use the for attribute of each label. To add an extra user-friendly detail, we’ll add a cursor: pointer to the radio button labels on the :hover state, so the user knows that they can simply click them to select that option. form#payment label { float: left; font-size: 13px; width: 110px; } form#payment fieldset fieldset label { background:none no-repeat left 50%; line-height: 20px; padding: 0 0 0 30px; width: auto; } form#payment label[for=visa] { background-image: url(visa.gif); } form#payment label[for=amex] { background-image: url(amex.gif); } form#payment label[for=mastercard] { background-image: url(mastercard.gif); } form#payment fieldset fieldset label:hover { cursor: pointer; } Almost there! Now onto the input elements. Here we want to match all inputs, except for the radio ones, and the textarea. For that we will use the negation pseudo-class (:not()). With it we can target all input elements except for the ones with type of radio. We will also make sure to add some :focus styles and add the appropriate styling for the radio inputs: form#payment input:not([type=radio]), form#payment textarea { background: #ffffff; border: none; -moz-border-radius: 3px; -webkit-border-radius: 3px; -khtml-border-radius: 3px; border-radius: 3px; font: italic 13px Georgia, "Times New Roman", Times, serif; outline: none; padding: 5px; width: 200px; } form#payment input:not([type=submit]):focus, form#payment textarea:focus { background: #eaeaea; } form#payment input[type=radio] { float: left; margin-right: 5px; } And finally we come to our submit button. To it, we will just add some nice typography and text-shadow, align it to the center of the form and give it some background colors for its different states: form#payment button { background: #384313; border: none; -moz-border-radius: 20px; -webkit-border-radius: 20px; -khtml-border-radius: 20px; border-radius: 20px; color: #ffffff; display: block; font: 18px Georgia, "Times New Roman", Times, serif; letter-spacing: 1px; margin: auto; padding: 7px 25px; text-shadow: 0 1px 1px #000000; text-transform: uppercase; } form#payment button:hover { background: #1e2506; cursor: pointer; } And that’s it! See the completed form. This form will not look the same on every browser.
Internet Explorer and Opera don’t support border-radius (at least not for now); the new input types are rendered as just normal text inputs in some browsers; and some of the most advanced CSS, like the counter, :last-of-type or text-shadow, is not yet supported everywhere. But that doesn’t mean you can’t use these features right now and simplify your development process. My gift to you! 2009 Inayaili de León Persson inayailideleon 2009-12-03T00:00:00+00:00 https://24ways.org/2009/have-a-field-day-with-html5-forms/ code
316 Have Your DOM and Script It Too When working with the XMLHttpRequest object it appears you can only go one of three ways: You can stay true to the colorful moniker du jour and stick strictly to the responseXML property You can play with proprietary – yet widely supported – fire and inject the value of the responseText property into the innerHTML of an element of your choosing Or you can be evil and eval() JSON or arbitrary JavaScript delivered via responseText But did you know that there’s a fourth option giving you the best of the latter two worlds? Mint uses this unmentioned approach to grab fresh HTML and run arbitrary JavaScript simultaneously. Without relying on eval(). “But wait-”, you might say, “when would I need to do this?” Besides the example below, this technique is handy for things like tab groups that need initialization onload but miss the main onload event handler by a mile thanks to asynchronous scripting. Consider the problem Originally Mint used option 2 to refresh or load new tabs into individual Pepper panes without requiring a full roundtrip to the server. This was all well and good until I introduced the new Client Mode which, when enabled, allows anyone to view a Mint installation without being logged in. If voyeurs are afoot when Client Mode is disabled, the next time they refresh a pane the entire login page is inserted into the current document. That’s not very helpful, so I needed a way to redirect the current document to the login page. Enter the solution Wouldn’t it be cool if browsers interpreted the contents of script tags crammed into innerHTML? Sure, but unfortunately, that just wasn’t meant to be. However, like the body element, image elements have an onload event handler. When the image has fully loaded the handler runs the code applied to it. See where I’m going with this? By tacking a tiny image (think single pixel, transparent spacer gif – shudder) onto the end of the HTML returned by our Ajax call, we can smuggle our arbitrary JavaScript into the existing document. The image is added to the DOM, and our stowaway can go to town. <p>This is the results of our Ajax call.</p> <img src="../images/loaded.gif" alt="" onload="alert('Now that I have your attention...');" /> Please be neat So we’ve just jammed some meaningless cruft into our DOM. If our script does anything with images this addition could have some unexpected side effects. (Remember The Fly?) So in order to save that poor, unsuspecting element whose innerHTML we just swapped out from sharing Jeff Goldblum’s terrible fate, we should tidy up after ourselves. And by using the removeChild method we do just that. <p>This is the results of our Ajax call.</p> <img src="../images/loaded.gif" alt="" onload="alert('Now that I have your attention...');this.parentNode.removeChild(this);" /> 2005 Shaun Inman shauninman 2005-12-24T00:00:00+00:00 https://24ways.org/2005/have-your-dom-and-script-it-too/ code
309 HTTP/2 Server Push and Service Workers: The Perfect Partnership Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2 which has a killer feature known as HTTP/2 Server Push. During this year’s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics he covered immediately piqued my interest: the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination. If you’ve never heard of HTTP/2 Server Push before, fear not – it’s not as scary as it sounds. HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users. What is HTTP/2 Server Push? When a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make multiple HTTP requests for each resource and begin downloading them one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower. Before we go any further, let’s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this: As you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs. This is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and “pushes” them to the browser. This happens when the first HTTP request for the web page takes place, and it eliminates an extra round trip, making your site faster. Using the same example above, let’s “push” the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like. Whoa, that looks different – let’s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn’t need to make an extra HTTP request to the server; instead, it receives the critical files it needs all at once. Much better! There are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I’ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js.
In this post, I’m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings. HTTP/2 Server Push is awesome, but it isn’t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn’t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache “aware”. This means that the server isn’t aware of the state of your client. If you’ve visited a web page before, the server isn’t aware of this and will push the resource again anyway, regardless of whether or not you need it. HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory browsers can cancel HTTP/2 Server Push requests if they’ve already got something in cache, but unfortunately no browsers currently support this. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs. HTTP/2 Server Push & Service Workers So where do Service Workers fit in? Believe it or not, when combined together HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you’ve not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as a middleman between the browser and the network, and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand. Using Service Workers, you can easily cache assets on a user’s device. This means when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in cache on the user’s device. If it does, then it can simply serve it directly from the device instead of ever hitting the server. Let’s stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load! Let’s put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files.
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>HTTP2 Push Demo</title> </head> <body> <h1>HTTP2 Push</h1> <img src="./images/beer-1.png" width="200" height="200" /> <img src="./images/beer-2.png" width="200" height="200" /> <br> <br> <img src="./images/beer-3.png" width="200" height="200" /> <img src="./images/beer-4.png" width="200" height="200" /> <!-- Scripts --> <script async src="./js/promise.min.js"></script> <script async src="./js/fetch.js"></script> <script> // Register the service worker if ('serviceWorker' in navigator) { navigator.serviceWorker.register('./service-worker.js').then(function(registration) { // Registration was successful console.log('ServiceWorker registration successful with scope: ', registration.scope); }).catch(function(err) { // registration failed :( console.log('ServiceWorker registration failed: ', err); }); } </script> </body> </html> In the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker toolbox. It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push. The Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns. I’ve added the following code to the service-worker.js file. (global => { 'use strict'; // Load the sw-toolbox library. importScripts('/js/sw-toolbox/sw-toolbox.js'); // The route for any requests toolbox.router.get('/push', global.toolbox.fastest); toolbox.router.get('/images/(.*)', global.toolbox.fastest); toolbox.router.get('/js/(.*)', global.toolbox.fastest); // Ensure that our service worker takes control of the page as soon as possible. global.addEventListener('install', event => event.waitUntil(global.skipWaiting())); global.addEventListener('activate', event => event.waitUntil(global.clients.claim())); })(self); Let’s break this code down further. Around line 4, I am importing the Service Worker toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I’ve told the toolbox to listen to these routes too. The best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn’t exist in cache, the code above will add it into cache so that we can retrieve it when it’s needed again. You may also notice the code global.toolbox.fastest – this is important because it gives you the compromise of fulfilling the request from cache immediately, while firing off an additional HTTP request that updates the cache for the next visit. But what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to “push” everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the user’s device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing! Using this technique, the waterfall chart for a repeat visit should look like the image below.
If you look closely at the image above, you’ll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache. Whether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user’s browser doesn’t support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience. HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance. Summary When used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site’s load times. Together they mean super fast first load times and even faster repeat views of a web page. Whilst this technique is really effective, it’s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don’t simply “push” everything; it could actually lead to having slower page load times. If you’d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information. All of the code in this example is available on my GitHub repo – if you have any questions, please submit an issue and I’ll get back to you as soon as possible. If you’d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone’s talk at this year’s Chrome Developer Summit. I’d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live! 2016 Dean Hume deanhume 2016-12-15T00:00:00+00:00 https://24ways.org/2016/http2-server-push-and-service-workers/ code
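By way of illustration of the server side discussed above, here’s a minimal sketch assuming Node’s built-in http2 module (an assumption on my part – the Node.js example linked in the article may take a different approach, and the certificate and file paths are hypothetical):

const http2 = require('http2');
const fs = require('fs');

// key/cert paths are hypothetical – browsers only speak HTTP/2 over TLS
const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/push') {
    // push the critical JavaScript before the browser asks for it
    stream.pushStream({ ':path': '/js/fetch.js' }, (err, pushStream) => {
      if (err) return;
      pushStream.respondWithFile('./js/fetch.js', {
        'content-type': 'application/javascript'
      });
    });
    // then respond to the original request with the page itself
    stream.respondWithFile('./index.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);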
177 HTML5: Tool of Satan, or Yule of Santa? It would lead to unseasonal arguments to discuss the title of this piece here, and the arguments are as indigestible as the fourth turkey curry of the season, so we’ll restrict our article to the practical rather than the philosophical: what HTML5 can you reasonably expect to be able to use reliably cross-browser in the early months of 2010? The answer is that you can use more than you might think, due to the seasonal tinsel of feature-detection and using the sparkly pixie-dust of IE-only VML (but used in a way that won’t damage your Elf). Canvas canvas is a 2D drawing API that defines a blank area of the screen of arbitrary size, and allows you to draw on it using JavaScript. The pictures can be animated, such as in this canvas mashup of Wolfenstein 3D and Flickr. (The difference between canvas and SVG is that SVG uses vector graphics, so is infinitely scalable. It also keeps a DOM, whereas canvas is just pixels so you have to do all your own book-keeping yourself in JavaScript if you want to know where aliens are on screen, or do collision detection.) Previously, you needed to do this using Adobe Flash or Java applets, requiring plugins and potentially compromising keyboard accessibility. Canvas drawing is supported now in Opera, Safari, Chrome and Firefox. The reindeer in the corner is, of course, Internet Explorer, which currently has zero support for canvas (or SVG, come to that). Now, don’t pull a face like all you’ve found in your Yuletide stocking is a mouldy satsuma and a couple of nuts—that’s not the end of the story. Canvas was originally an Apple proprietary technology, and Internet Explorer had a similar one called Vector Markup Language which was submitted to the W3C for standardisation in 1998 but which, unlike canvas, was not blessed with retrospective standardisation. What you need, then, is some way for Internet Explorer to translate canvas to VML on-the-fly, while leaving the other, more standards-compliant browsers to use the HTML5. And such a way exists—it’s a JavaScript library called excanvas. It’s downloadable from http://code.google.com/p/explorercanvas/ and it’s simple to include it via a conditional comment in the head for IE: <!--[if IE]> <script src="excanvas.js"></script> <![endif]--> Simply include this, and your canvas will be natively supported in the modern browsers (and the library won’t even be downloaded) whereas IE will suddenly render your canvas using its own VML engine. Be sure, however, to check it carefully, as the IE JavaScript engine isn’t so fast and you’ll need to be sure that performance isn’t too degraded to use. Forms Since the beginning of the Web, developers have been coding forms, and then writing JavaScript to check whether an input is a correctly formed email address, URL, credit card number or conforms to some other pattern. The cumulative labour of the world’s developers over the last 15 years makes whizzing round in a sleigh and delivering presents seem like popping to the corner shop in comparison. With HTML5, that’s all about to change. As Yaili began to explore on Day 3, a host of new attributes to the input element provide built-in validation for email address formats (input type=email), URLs (input type=url), any pattern that can be expressed with a JavaScript-syntax regex (pattern="[0-9][A-Z]{3}") and the like. New attributes such as required, autofocus, input type=number min=3 max=50 remove much of the tedious JavaScript from form validation. 
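To make that concrete, here’s a quick hedged sketch (my own example, not one from the article):

<form>
  <input type=email required placeholder=you@example.com>
  <input type=number min=3 max=50>
  <input type=submit value=Send>
</form>

In a supporting browser, that alone is enough for the form to refuse submission of a badly formatted email address or an out-of-range number – no JavaScript involved.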
Other, really exciting input types are available (see all input types). The datalist is reminiscent of a select box, but allows the user to enter their own text if they don’t want to choose one of the pre-defined options. input type=range is rendered as a slider, while input type=date pops up a date picker, all natively in the browser with no JavaScript required at all. Currently, support is most complete in Opera’s experimental implementation, with a number of the new attributes also supported in WebKit-based browsers. But don’t let that stop you! The clever thing about the specification of the new Web Forms is that all the new input types are just values of the type attribute (rather than new elements). input defaults to input type=text, so if a browser doesn’t understand a new HTML5 type, it gracefully degrades to a plain text input. So where does that leave validation in those browsers that don’t support Web Forms? The answer is that you don’t retire your pre-existing JavaScript validation just yet, but you leave it as a fallback after doing some feature detection. To detect whether (say) input type=email is supported, you make a new input type=email with JavaScript but don’t add it to the page. Then, you interrogate your new element to find out what its type attribute is. If it’s reported back as “email”, then the browser supports the new feature, so let it do its work and don’t bring in any JavaScript validation. If it’s reported back as “text”, it’s fallen back to the default, indicating that it’s not supported, so your code should branch to your old validation routines. Alternatively, use the small (7K) Modernizr library, which will do this work for you and give you JavaScript booleans like Modernizr.inputtypes.email set to true or false. So what does this buy you? Well, first and foremost, you’re future-proofing your code for that time when all browsers support these hugely useful additions to forms. Secondly, you buy a usability and accessibility win. Although it’s tempting to style the stuffing out of your form fields (which can, incidentally, lead to madness), whatever your branding people say, it’s better to leave forms as close to the browser defaults as possible. A browser’s slider and date pickers will be the same across different sites, making them much more comprehensible to users. And, by using native controls rather than faking sliders and date pickers with JavaScript, your forms are much more likely to be accessible to users of assistive technology. HTML5 DOCTYPE You can use the new DOCTYPE <!doctype html> now and – hey presto – you’re writing HTML5, as it’s pretty much a superset of HTML4. There are some useful advantages to doing this. The first is that the HTML5 validator (I use http://html5.validator.nu) also validates ARIA information, whereas the HTML4 validator doesn’t, as ARIA is a new spec developed after HTML4. (Actually, it’s more accurate to say that it doesn’t validate your ARIA attributes, but it doesn’t automatically report them as an error.) Another advantage is that HTML5 allows tabindex as a global attribute (that is, on any element). Although tabindex was originally designed as an accessibility bolt-on, I ordinarily advise against using it; a well-structured page should provide a logical tab order through links and form fields already. However, tabindex="-1" is a legal value in HTML5 as it allows the element to be programmatically focussable by JavaScript. 
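A minimal sketch of that last point (the element ID is hypothetical, and this is just one way to use the feature): give the target a tabindex of -1 from script, then move focus to it.

var target = document.getElementById('main-content'); // hypothetical in-page target
if (target) {
  target.setAttribute('tabindex', '-1'); // focusable by script, but not added to the tab order
  target.focus(); // e.g. after following a skip link
}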
It’s also very useful for correcting a bug in Internet Explorer when navigating with a keyboard; in-page links go nowhere if the destination doesn’t have a proprietary property called hasLayout set or a tabindex of -1. So, whether it is the tool of Satan or yule of Santa, HTML5 is just around the corner. Some of it you can use now, and by the end of 2010 I predict you’ll be able to use a whole lot more as new browser versions are released. 2009 Bruce Lawson brucelawson 2009-12-05T00:00:00+00:00 https://24ways.org/2009/html5-tool-of-satan-or-yule-of-santa/ code
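As a footnote to the forms section above, the hand-rolled feature detection the article describes might look something like this minimal sketch (the function name is made up; Modernizr does the same job for you):

function supportsInputType(type) {
  var input = document.createElement('input');
  input.setAttribute('type', type);
  // Unsupported types fall back to 'text', so reading the property back tells the truth.
  return input.type === type;
}

if (!supportsInputType('email')) {
  // Branch to your old JavaScript validation routines here.
}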
80 HTML5 Video Bumpers Video is a bigger part of the web experience than ever before. With native browser support for HTML5 video elements freeing us from the tyranny of plugins, and the availability of faster internet connections to the workplace, home and mobile networks, it’s now pretty straightforward to publish video in a way that can be consumed in all sorts of ways on all sorts of different web devices. I recently worked on a project where the client had shot some dedicated video shorts to publish on their site. They also had some five-second motion graphics produced to top and tail the videos with context and branding. This pretty common requirement is a great idea on the web, where a user might land at your video having followed a link and be viewing a page without much context. Known as bumpers, these short introduction clips help brand a video and make it look a lot more professional. Adding bumpers to a video The simplest way to add bumpers to a video would be to edit them on to the start and end of the video file itself. Cooking the bumpers into the video file is easy, but should you ever want to update them it can become a real headache. If the branding needs updating, for example, you’d need to re-edit and re-encode all your videos. Not a fun task. What if the bumpers could be added dynamically? That would enable you to use the same bumper for multiple videos (decreasing download time for users who might watch more than one) and to update the bumpers whenever you wanted. You could change them seasonally, update them for special promotions, run different advertising slots, perform multivariate testing, or even target different bumpers to different users. The trade-off, of course, is that if you dynamically add your bumpers, there’s a chance that a user in a given circumstance might not see the bumper. For example, if the main video feature was uploaded to YouTube, you’d have no way to control the playback. As always, you need to weigh up the pros and cons and make your choice. HTML5 bumpers If you wanted to dynamically add bumpers to your HTML5 video, how would you go about it? That was the question I found myself needing to answer for this particular client project. My initial thought was to treat it just like an image slideshow. If I were building a slideshow that moved between images, I’d use CSS absolute positioning with z-index to stack the images up on top of each other in a pile, with the first image on top. To transition to the second image, I’d use JavaScript to fade the top image out, revealing the second image beneath it. Now that video is just a native object in the DOM, just like an image, why not do the same? Stack the videos up with the opening bumper on top, listen for the video’s onended event, and fade it out to reveal the main feature behind. Good idea, right? Wrong Remember that this is the web. It’s never going to be that easy. The problem here is that many non-desktop devices use native, dedicated video players. Think about watching a video on a mobile phone – when you play the video, the phone often goes full-screen in its native player, leaving the web page behind. There’s no opportunity to fade or switch z-index, as the video isn’t being viewed in the page. Your page is left powerless. Powerless! So what can we do? What can we control? Those of us with particularly long memories might recall a time before CSS, when we’d have to use JavaScript to perform image rollovers. 
As CSS background images weren’t a practical reality, we would use lots of <img> elements, and perform a rollover by modifying the src attribute of the image. Turns out, this old trick of modifying the source can help us out with video, too. In most cases, modifying the src attribute of a <video> element, or perhaps more likely the src attribute of a source element, will swap from one video to another. Swappin’ it Let’s take a deliberately simple example of a super-basic video tag: <video src="mycat.webm" controls>no fallback coz i is lame, innit.</video> We could very simply write a script to find all video tags and give them a new src to show our bumper. <script> var videos, i, l; videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].setAttribute('src', 'bumper-in.webm'); } </script> View the example in a browser with WebM support. You’ll see that the video is swapped out for the opening bumper. Great! Beefing it up Of course, we can’t just publish video in one format. In practical use, you need a <video> element with multiple <source> elements containing your different source formats. <video controls> <source src="mycat.mp4" type="video/mp4" /> <source src="mycat.webm" type="video/webm" /> <source src="mycat.ogv" type="video/ogg" /> </video> This time, our script needs to loop through the sources, not the videos. We’ll use a regular expression replacement to swap out the file name while maintaining the correct file extension. <script> var sources, i, l, orig; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); // reload the video sources[i].parentNode.load(); } </script> The difference this time is that when changing the src of a <source> we need to call the .load() method on the video to get it to acknowledge the change. See the code in action, this time in a wider range of browsers. But, my video! I guess we should get the original video playing again. Keeping the same markup, we need to modify the script to do two things: Store the original src in a data- attribute so we can access it later Add an event listener so we can detect the end of the bumper playing, and load the original video back in As we need to loop through the videos this time to add the event listener, I’ve moved the .load() call into that loop. It’s a bit more efficient to call it only once after modifying all the video’s sources. <script> var videos, sources, i, l, orig; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('data-orig', orig); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); } videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].load(); videos[i].addEventListener('ended', function(){ sources = this.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('data-orig'); if (orig) { sources[i].setAttribute('src', orig); } sources[i].setAttribute('data-orig',''); } this.load(); this.play(); }); } </script> Again, view the example to see the bumper play, followed by our spectacular main feature. (That’s my cat, Widget. His interests include sleeping and internet marketing.) Tidying things up The final thing to do is add our closing bumper after the main video has played. 
This involves the following changes: We need to keep track of whether the src has been changed, so we only play the video if it’s changed. I’ve added the modified variable to track this, and it stops us getting into a situation where the video just loops forever. Add an else to the event listener, for when the orig is false (so the main feature has been playing) to load in the end bumper. We also check that we’re not already playing the end bumper. Because looping. <script> var videos, sources, i, l, orig, current, modified; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('data-orig', orig); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); } videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].load(); modified = false; videos[i].addEventListener('ended', function(){ sources = this.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('data-orig'); if (orig) { sources[i].setAttribute('src', orig); modified = true; }else{ current = sources[i].getAttribute('src'); if (current.indexOf('bumper-out')==-1) { sources[i].setAttribute('src', current.replace(/([\w]+)\.(\w+)/, 'bumper-out.$2')); modified = true; }else{ this.pause(); modified = false; } } sources[i].setAttribute('data-orig',''); } if (modified) { this.load(); this.play(); } }); } </script> Yo ho ho, that’s a lot of JavaScript. See it in action – you should get a bumper, the cat video, and an end bumper. Of course, this code works fine for demonstrating the principle, but it’s very procedural. Nothing wrong with that, but to do something similar in production, you’d probably want to make the code more modular to ease maintainability. Besides, you may want to use a framework, rather than basic JavaScript. The end credits One really important principle here is that of progressive enhancement. If the browser doesn’t support JavaScript, the user won’t see your bumper, but they will get the main video. If the browser supports JavaScript but doesn’t allow you to modify the src (as was the case with older versions of iOS), the user won’t see your bumper, but they will get the main video. If a search engine or social media bot grabs your page and looks for content, they won’t see your bumper, but they will get the main video – which is absolutely what you want. This means that if the bumper is absolutely crucial, you may still need to cook it into the video. However, for many applications, running it dynamically can work quite well. As always, it comes down to three things: Measure your audience: know how people access your site Test the solution: make sure it works for your audience Plan for failure: it’s the web and that’s how things work ‘round these parts But most of all play around with it, have fun and build something awesome. 2012 Drew McLellan drewmclellan 2012-12-01T00:00:00+00:00 https://24ways.org/2012/html5-video-bumpers/ code
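Picking up that closing thought about modularity, here is one possible shape for a tidier version. This is a hedged sketch only: the playWithBumpers function name is hypothetical, it assumes the same bumper file-naming convention as the article, and it keeps state in a closure instead of data- attributes.

function playWithBumpers(video, opening, closing) {
  var sources = video.getElementsByTagName('source');
  var originals = [], phase = 'opening', i;
  for (i = 0; i < sources.length; i++) {
    originals.push(sources[i].getAttribute('src'));
    sources[i].setAttribute('src', originals[i].replace(/(\w+)\.(\w+)$/, opening + '.$2'));
  }
  video.load();
  video.addEventListener('ended', function () {
    if (phase === 'opening') {
      // The opening bumper has finished: restore the main feature.
      for (i = 0; i < sources.length; i++) {
        sources[i].setAttribute('src', originals[i]);
      }
      phase = 'feature';
    } else if (phase === 'feature') {
      // The main feature has finished: swap in the closing bumper.
      for (i = 0; i < sources.length; i++) {
        sources[i].setAttribute('src', originals[i].replace(/(\w+)\.(\w+)$/, closing + '.$2'));
      }
      phase = 'closing';
    } else {
      return; // the closing bumper has finished, so stop here
    }
    this.load();
    this.play();
  });
}

playWithBumpers(document.getElementsByTagName('video')[0], 'bumper-in', 'bumper-out');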
18 Grunt for People Who Think Things Like Grunt are Weird and Hard Front-end developers are often told to do certain things: Work in as small chunks of CSS and JavaScript as makes sense to you, then concatenate them together for the production website. Compress your CSS and minify your JavaScript to make their file sizes as small as possible for your production website. Optimize your images to reduce their file size without affecting quality. Use Sass for CSS authoring because of all the useful abstraction it allows. That’s not a comprehensive list of course, but those are the kind of things we need to do. You might call them tasks. I bet you’ve heard of Grunt. Well, Grunt is a task runner. Grunt can do all of those things for you. Once you’ve got it set up, which isn’t particularly difficult, those things can happen automatically without you having to think about them again. But let’s face it: Grunt is one of those fancy newfangled things that all the cool kids seem to be using but at first glance feels strange and intimidating. I hear you. This article is for you. Let’s nip some misconceptions in the bud right away Perhaps you’ve heard of Grunt, but haven’t done anything with it. I’m sure that applies to many of you. Maybe one of the following hang-ups applies to you. I don’t need the things Grunt does You probably do, actually. Check out that list up top. Those things aren’t nice-to-haves. They are pretty vital parts of website development these days. If you already do all of them, that’s awesome. Perhaps you use a variety of different tools to accomplish them. Grunt can help bring them under one roof, so to speak. If you don’t already do all of them, you probably should and Grunt can help. Then, once you are doing those, you can keep using Grunt to do more for you, which will basically make you better at doing your job. Grunt runs on Node.js — I don’t know Node You don’t have to know Node. Just like you don’t have to know Ruby to use Sass. Or PHP to use WordPress. Or C++ to use Microsoft Word. I have other ways to do the things Grunt could do for me Are they all organized in one place, configured to run automatically when needed, and shared among every single person working on that project? Unlikely, I’d venture. Grunt is a command line tool — I’m just a designer I’m a designer too. I prefer native apps with graphical interfaces when I can get them. But I don’t think that’s going to happen with Grunt[1]. The extent to which you need to use the command line is: Navigate to your project’s directory. Type grunt and press Return. After set-up, that is, which again isn’t particularly difficult. OK. Let’s get Grunt installed Node is indeed a prerequisite for Grunt. If you don’t have Node installed, don’t worry, it’s very easy. You literally download an installer and run it. Click the big Install button on the Node website. You install Grunt on a per-project basis. Go to your project’s folder. It needs a file there named package.json at the root level. You can just create one and put it there. package.json at root The contents of that file should be this: { "name": "example-project", "version": "0.1.0", "devDependencies": { "grunt": "~0.4.1" } } Feel free to change the name of the project and the version, but the devDependencies thing needs to be in there just like that. This is how Node does dependencies. Node has a package manager called NPM (Node packaged modules) for managing Node dependencies (like a gem for Ruby if you’re familiar with that). 
You could even think of it a bit like a plug-in for WordPress. Once that package.json file is in place, go to the terminal and navigate to your folder. Terminal rubes like me do it like this: Terminal rube changing directories Then run the command: npm install After you’ve run that command, a new folder called node_modules will show up in your project. Example of node_modules folder The other files you see there, README.md and LICENSE, are there because I’m going to put this project on GitHub and that’s just standard fare there. The last installation step is to install the Grunt CLI (command line interface). That’s what makes the grunt command in the terminal work. Without it, typing grunt will net you a “Command Not Found”-style error. It is a separate installation for efficiency reasons. Otherwise, if you had ten projects you’d have ten copies of Grunt CLI. This is a one-liner again. Just run this command in the terminal: npm install -g grunt-cli You should close and reopen the terminal as well. That’s a generic good practice to make sure things are working right. Kinda like how restarting your computer after installing a new application was the done thing in the olden days. Let’s make Grunt concatenate some files Perhaps in our project there are three separate JavaScript files: jquery.js – The library we are using. carousel.js – A jQuery plug-in we are using. global.js – Our authored JavaScript file where we configure and call the plug-in. In production, we would concatenate all those files together for performance reasons (one request is better than three). We need to tell Grunt to do this for us. But wait. Grunt actually doesn’t do anything all by itself. Remember Grunt is a task runner. The tasks themselves we will need to add. We actually haven’t set up Grunt to do anything yet, so let’s do that. The official Grunt plug-in for concatenating files is grunt-contrib-concat. You can read about it on GitHub if you want, but all you have to do to use it on your project is to run this command from the terminal (it will henceforth go without saying that you need to run the given commands from your project’s root folder): npm install grunt-contrib-concat --save-dev A neat thing about doing it this way: your package.json file will automatically be updated to include this new dependency. Open it up and check it out. You’ll see a new line: "grunt-contrib-concat": "~0.3.0" Now we’re ready to use it. To use it we need to start configuring Grunt and telling it what to do. You tell Grunt what to do via a configuration file named Gruntfile.js[2]. Just like our package.json file, our Gruntfile.js has a very special format that must be just right. I wouldn’t worry about what every word of this means. Just check out the format: module.exports = function(grunt) { // 1. All configuration goes here grunt.initConfig({ pkg: grunt.file.readJSON('package.json'), concat: { // 2. Configuration for concatenating files goes here. } }); // 3. Where we tell Grunt we plan to use this plug-in. grunt.loadNpmTasks('grunt-contrib-concat'); // 4. Where we tell Grunt what to do when we type "grunt" into the terminal. grunt.registerTask('default', ['concat']); }; Now we need to create that configuration. The documentation can be overwhelming. Let’s focus just on the very simple usage example. Remember, we have three JavaScript files we’re trying to concatenate. We’ll list file paths to them under src in an array of file paths (as quoted strings) and then we’ll list a destination file as dest. The destination file doesn’t have to exist yet. 
It will be created when this task runs and squishes all the files together. Both our jquery.js and carousel.js files are libraries. We most likely won’t be touching them. So, for organization, we’ll keep them in a /js/libs/ folder. Our global.js file is where we write our own code, so that will be right in the /js/ folder. Now let’s tell Grunt to find all those files and squish them together into a single file named production.js, named that way to indicate it is for use on our real live website. concat: { dist: { src: [ 'js/libs/*.js', // All JS in the libs folder 'js/global.js' // This specific file ], dest: 'js/build/production.js', } } Note: throughout this article there will be little chunks of configuration code like above. The intention is to focus in on the important bits, but it can be confusing at first to see how a particular chunk fits into the larger file. If you ever get confused and need more context, refer to the complete file. With that concat configuration in place, head over to the terminal, run the command: grunt and watch it happen! production.js will be created and will be a perfect concatenation of our three files. This was a big aha! moment for me. Feel the power course through your veins. Let’s do more things! Let’s make Grunt minify that JavaScript We have so much prep work done now, adding new tasks for Grunt to run is relatively easy. We just need to: Find a Grunt plug-in to do what we want Learn the configuration style of that plug-in Write that configuration to work with our project The official plug-in for minifying code is grunt-contrib-uglify. Just like we did last time, we just run an NPM command to install it: npm install grunt-contrib-uglify --save-dev Then we alter our Gruntfile.js to load the plug-in: grunt.loadNpmTasks('grunt-contrib-uglify'); Then we configure it: uglify: { build: { src: 'js/build/production.js', dest: 'js/build/production.min.js' } } Let’s update that default task to also run minification: grunt.registerTask('default', ['concat', 'uglify']); Super-similar to the concatenation set-up, right? Run grunt at the terminal and you’ll get some deliciously minified JavaScript: Minified JavaScript That production.min.js file is what we would load up for use in our index.html file. Let’s make Grunt optimize our images We’ve got this down pat now. Let’s just go through the motions. The official image minification plug-in for Grunt is grunt-contrib-imagemin. Install it: npm install grunt-contrib-imagemin --save-dev Register it in the Gruntfile.js: grunt.loadNpmTasks('grunt-contrib-imagemin'); Configure it: imagemin: { dynamic: { files: [{ expand: true, cwd: 'images/', src: ['**/*.{png,jpg,gif}'], dest: 'images/build/' }] } } Make sure it runs: grunt.registerTask('default', ['concat', 'uglify', 'imagemin']); Run grunt and watch that gorgeous squishification happen: Squished images Gotta love performance increases for nearly zero effort. Let’s get a little bit smarter and automate What we’ve done so far is awesome and incredibly useful. But there are a couple of things we can get smarter on and make things easier on ourselves, as well as Grunt: Run these tasks automatically when they should Run only the tasks needed at the time For instance: Concatenate and minify JavaScript when JavaScript changes Optimize images when a new image is added or an existing one changes We can do this by watching files. We can tell Grunt to keep an eye out for changes to specific places and, when changes happen in those places, run specific tasks. 
Watching happens through the official grunt-contrib-watch plug-in. I’ll let you install it. It is exactly the same process as the last few plug-ins we installed. We configure it by giving watch specific files (or folders, or both) to watch. By watch, I mean monitor for file changes, file deletions or file additions. Then we tell it what tasks we want to run when it detects a change. We want to run our concatenation and minification when anything in the /js/ folder changes. When it does, we should run the JavaScript-related tasks. And when things happen elsewhere, we should not run the JavaScript-related tasks, because that would be irrelevant. So: watch: { scripts: { files: ['js/*.js'], tasks: ['concat', 'uglify'], options: { spawn: false, }, } } Feels pretty comfortable at this point, hey? The only weird bit there is the spawn thing. And you know what? I don’t even really know what that does. From what I understand from the documentation it is the smart default. That’s real-world development. Just leave it alone if it’s working and if it’s not, learn more. Note: Isn’t it frustrating when something that looks so easy in a tutorial doesn’t seem to work for you? If you can’t get Grunt to run after making a change, it’s very likely to be a syntax error in your Gruntfile.js. That might look like this in the terminal: Errors running Grunt Usually Grunt is pretty good about letting you know what happened, so be sure to read the error message. In this case, a syntax error in the form of a missing comma foiled me. Adding the comma allowed it to run. Let’s make Grunt do our preprocessing The last thing on our list from the top of the article is using Sass — yet another task Grunt is well-suited to run for us. But wait. Isn’t Sass technically in Ruby? Indeed it is. There is a version of Sass that will run in Node and thus not add an additional dependency to our project, but it’s not quite up-to-snuff with the main Ruby project. So, we’ll use the official grunt-contrib-sass plug-in, which just assumes you have Sass installed on your machine. If you don’t, follow the command line instructions. What’s neat about Sass is that it can do concatenation and minification all by itself. So for our little project we can just have it compile our main global.scss file: sass: { dist: { options: { style: 'compressed' }, files: { 'css/build/global.css': 'css/global.scss' } } } We wouldn’t want to manually run this task. We already have the watch plug-in installed, so let’s use it! Within the watch configuration, we’ll add another subtask: css: { files: ['css/*.scss'], tasks: ['sass'], options: { spawn: false, } } That’ll do it. Now, every time we change any of our Sass files, the CSS will automatically be updated. Let’s take this one step further (it’s absolutely worth it) and add LiveReload. With LiveReload, you won’t have to go back to your browser and refresh the page. Page refreshes happen automatically and in the case of CSS, new styles are injected without a page refresh (handy for heavily state-based websites). It’s very easy to set up, since the LiveReload ability is built into the watch plug-in. We just need to: Install the browser plug-in Add to the top of the watch configuration: watch: { options: { livereload: true, }, scripts: { /* etc */ Restart the browser and click the LiveReload icon to activate it. Update some Sass and watch it change the page automatically. Live reloading browser Yum. Prefer a video? 
If you’re the type that likes to learn by watching, I’ve made a screencast to accompany this article that I’ve published over on CSS-Tricks: First Moments with Grunt Leveling up As you might imagine, there is a lot of leveling up you can do with your build process. It surely could be a full-time job in some organizations. Some hardcore devops nerds might scoff at the simplistic setup we have going here. But I’d advise them to slow their roll. Even what we have done so far is tremendously valuable. And don’t forget this is all free and open source, which is amazing. You might level up by adding more useful tasks: Running your CSS through Autoprefixer (A+ Would recommend) instead of preprocessor add-ons. Writing and running JavaScript unit tests (example: Jasmine). Building your image sprites and SVG icons automatically (example: Grunticon). Starting a server, so you can link to assets with proper file paths and use services that require a real URL like TypeKit and such, as well as remove the need for other tools that do this, like MAMP. Checking for code problems with HTML-Inspector, CSS Lint, or JS Hint. Having new CSS automatically injected into the browser whenever it changes. Helping you commit or push to a version control repository like GitHub. Adding version numbers to your assets (cache busting). Helping you deploy to a staging or production environment (example: DPLOY). You might level up by simply understanding more about Grunt itself: Read Grunt Boilerplate by Mark McDonnell. Read Grunt Tips and Tricks by Nicolas Bevacqua. Organize your Gruntfile.js by splitting it up into smaller files. Check out other people’s and projects’ Gruntfile.js. Learn more about Grunt by digging into its source and learning about its API. Let’s share I think some group sharing would be a nice way to wrap this up. If you are installing Grunt for the first time (or remember doing that), be especially mindful of little frustrating things you experience(d) but work(ed) through. Those are the things we should share in the comments here. That way we have this safe place and useful resource for working through those confusing moments without the embarrassment. We’re all in this thing together! [1] Maybe someday someone will make a beautiful Grunt app for your operating system of choice. But I’m not sure that day will come. The configuration of the plug-ins is the important part of using Grunt. Each plug-in is a bit different, depending on what it does. That means a uniquely considered UI for every single plug-in, which is a long shot. Perhaps a decent middle ground is this Grunt DevTools Chrome add-on. [2] Gruntfile.js is often referred to as Gruntfile in documentation and examples. Don’t literally name it Gruntfile — it won’t work. 2013 Chris Coyier chriscoyier 2013-12-11T00:00:00+00:00 https://24ways.org/2013/grunt-is-not-weird-and-hard/ code
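For reference, the chunks built up over the course of the article assemble into a single Gruntfile.js along these lines. This is stitched together from the snippets above as a convenience, not a copy of the author’s canonical complete file (which lives in the project he links to):

module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    concat: {
      dist: {
        src: ['js/libs/*.js', 'js/global.js'],
        dest: 'js/build/production.js'
      }
    },
    uglify: {
      build: {
        src: 'js/build/production.js',
        dest: 'js/build/production.min.js'
      }
    },
    imagemin: {
      dynamic: {
        files: [{
          expand: true,
          cwd: 'images/',
          src: ['**/*.{png,jpg,gif}'],
          dest: 'images/build/'
        }]
      }
    },
    sass: {
      dist: {
        options: { style: 'compressed' },
        files: { 'css/build/global.css': 'css/global.scss' }
      }
    },
    watch: {
      options: { livereload: true },
      scripts: {
        files: ['js/*.js'],
        tasks: ['concat', 'uglify'],
        options: { spawn: false }
      },
      css: {
        files: ['css/*.scss'],
        tasks: ['sass'],
        options: { spawn: false }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-imagemin');
  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['concat', 'uglify', 'imagemin']);
};

Run grunt for a one-off build, or grunt watch to leave the watcher running while you work.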
68 Grid, Flexbox, Box Alignment: Our New System for Layout Three years ago for 24 ways 2012, I wrote an article about a new CSS layout method I was excited about. A specification had emerged, developed by people from the Internet Explorer team, bringing us a proper grid system for the web. In 2015, that Internet Explorer implementation is still the only public implementation of CSS grid layout. However, in 2016 we should be seeing it in a new improved form ready for our use in browsers. Grid layout has developed hidden behind a flag in Blink, and in nightly builds of WebKit and, latterly, Firefox. By being developed in this way, breaking changes could be safely made to the specification as no one was relying on the experimental implementations in production work. Another new layout method has emerged over the past few years in a more public and perhaps more painful way. Shipped prefixed in browsers, The flexible box layout module (flexbox) was far too tempting for developers not to use on production sites. Therefore, as changes were made to the specification, we found ourselves with three different flexboxes, and browser implementations that did not match one another in completeness or in the version of specified features they supported. Owing to the different ways these modules have come into being, when I present on grid layout it is often the very first time someone has heard of the specification. A question I keep being asked is whether CSS grid layout and flexbox are competing layout systems, as though it might be possible to back the loser in a CSS layout competition. The reality, however, is that these two methods will sit together as one system for doing layout on the web, each method playing to certain strengths and serving particular layout tasks. If there is to be a loser in the battle of the layouts, my hope is that it will be the layout frameworks that tie our design to our markup. They have been a necessary placeholder while we waited for a true web layout system, but I believe that in a few years time we’ll be easily able to date a website to circa 2015 by seeing <div class="row"> or <div class="col-md-3"> in the markup. In this article, I’m going to take a look at the common features of our new layout systems, along with a couple of examples which serve to highlight the differences between them. To see the grid layout examples you will need to enable grid in your browser. The easiest thing to do is to enable the experimental web platform features flag in Chrome. Details of current browser support can be found here. Relationship Items only become flex or grid items if they are a direct child of the element that has display:flex, display:grid or display:inline-grid applied. Those direct children then understand themselves in the context of the complete layout. This makes many things possible. It’s the lack of relationship between elements that makes our existing layout methods difficult to use. If we float two columns, left and right, we have no way to tell the shorter column to extend to the height of the taller one. We have expended a lot of effort trying to figure out the best way to make full-height columns work, using techniques that were never really designed for page layout. At a very simple level, the relationship between elements means that we can easily achieve full-height columns. In flexbox: See the Pen Flexbox equal height columns by rachelandrew (@rachelandrew) on CodePen. 
And in grid layout (requires a CSS grid-supporting browser): See the Pen Grid equal height columns by rachelandrew (@rachelandrew) on CodePen. Alignment Full-height columns rely on our flex and grid items understanding themselves as part of an overall layout. They also draw on a third new specification: the box alignment module. If vertical centring is a gift you’d like to have under your tree this Christmas, then this is the box you’ll want to unwrap first. The box alignment module takes the alignment and space distribution properties from flexbox and applies them to other layout methods. That includes grid layout, but also other layout methods. Once implemented in browsers, this specification will give us true vertical centring of all the things. Our examples above achieved full-height columns because the default value of align-items is stretch. The value ensured our columns stretched to the height of the tallest. If we want to use our new vertical centring abilities on all items, we would set align-items:center on the container. To align one flex or grid item, apply the align-self property. The examples below demonstrate these alignment properties in both grid layout and flexbox. The portrait image of Widget the cat is aligned with the default stretch. The other three images are aligned using different values of align-self. Take a look at an example in flexbox: See the Pen Flexbox alignment by rachelandrew (@rachelandrew) on CodePen. And also in grid layout (requires a CSS grid-supporting browser): See the Pen Grid alignment by rachelandrew (@rachelandrew) on CodePen. The alignment properties used with CSS grid layout. Fluid grids A cornerstone of responsive design is the concept of fluid grids. “[…]every aspect of the grid—and the elements laid upon it—can be expressed as a proportion relative to its container.” —Ethan Marcotte, “Fluid Grids” The method outlined by Marcotte is to divide the target width by the context, then use that value as a percentage value for the width property on our element. h1 { margin-left: 14.575%; /* 144px / 988px = 0.14575 */ width: 70.85%; /* 700px / 988px = 0.7085 */ } In more recent years, we’ve been able to use calc() to simplify this (at least, for those of us able to drop support for Internet Explorer 8). However, flexbox and grid layout make fluid grids simple. The most basic of flexbox demos shows this fluidity in action. The justify-content property – another property defined in the box alignment module – can be used to create an equal amount of space between or around items. As the available width increases, more space is assigned in proportion. In this demo, the list items are flex items due to display:flex being added to the ul. I have given them a maximum width of 250 pixels. Any remaining space is distributed equally between the items as the justify-content property has a value of space-between. See the Pen Flexbox: justify-content by rachelandrew (@rachelandrew) on CodePen. For true fluid grid-like behaviour, your new flexible friends are flex-grow and flex-shrink. These properties give us the ability to assign space in proportion. The flexbox flex property is a shorthand for: flex-grow flex-shrink flex-basis The flex-basis property sets the default width for an item. If flex-grow is set to 0, then the item will not grow larger than the flex-basis value; if flex-shrink is 0, the item will not shrink smaller than the flex-basis value. flex: 1 1 200px: a flexible box that can grow and shrink from a 200px basis. 
flex: 0 0 200px: a box that will be 200px and cannot grow or shrink. flex: 1 0 200px: a box that can grow bigger than 200px, but not shrink smaller. In this example, I have a set of boxes that can all grow and shrink equally from a 100 pixel basis. See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen. What I would like to happen is for the first element, containing a portrait image, to take up less width than the landscape images, thus keeping it more in proportion. I can do this by changing the flex-grow value. By giving all the items a value of 1, they all gain an equal amount of the available space after the 100 pixel basis has been worked out. If I give them all a value of 3 and the first box a value of 1, the other boxes will be assigned three parts of the available space while box 1 is assigned only one part. You can see what happens in this demo: See the Pen Flexbox: flex-grow by rachelandrew (@rachelandrew) on CodePen. Once you understand flex-grow, you should easily be able to grasp how the new fraction unit (fr, defined in the CSS grid layout specification) works. Like flex-grow, this unit allows us to assign available space in proportion. In this case, we assign the space when defining our track sizes. In this demo (which requires a CSS grid-supporting browser), I create a four-column grid using the fraction unit to define my track sizes. The first track is 1fr in width, and the others 2fr. grid-template-columns: 1fr 2fr 2fr 2fr; See the Pen Grid fraction units by rachelandrew (@rachelandrew) on CodePen. The four-track grid. Separation of concerns My younger self petitioned my peers to stop using tables for layout and to move to CSS. One of the rallying cries of that movement was the concept of separating our source and content from how they were displayed. It was something of a failed promise given the tools we had available: the display leaked into the markup with the need for redundant elements to cope with browser bugs, or visual techniques that just could not be achieved without supporting markup. Browsers have improved, but even now we can find ourselves compromising the ideal document structure so we can get the layout we want at various breakpoints. In some ways, the situation has returned to tables-for-layout days. Many of the current grid frameworks rely on describing our layout directly in the markup. We add divs for rows, and classes to describe the number of desired columns. We nest these constructions of divs inside one another. Here is a snippet from the Bootstrap grid examples – two columns with two nested columns: <div class="row"> <div class="col-md-8"> .col-md-8 <div class="row"> <div class="col-md-6"> .col-md-6 </div> <div class="col-md-6"> .col-md-6 </div> </div> </div> <div class="col-md-4"> .col-md-4 </div> </div> Not a million miles away from something I might have written in 1999. <table> <tr> <td class="col-md-8"> .col-md-8 <table> <tr> <td class="col-md-6"> .col-md-6 </td> <td class="col-md-6"> .col-md-6 </td> </tr> </table> </td> <td class="col-md-4"> .col-md-4 </td> </tr> </table> Grid and flexbox layouts do not need to be described in markup. The layout description happens entirely in the CSS, meaning that elements can be moved around from within the presentation layer. Flexbox gives us the ability to reverse the flow of elements, but also to set the order of elements with the order property. 
This is demonstrated here, where Widget the cat is in position 1 in the source, but I have used the order property to display him after the things that are currently unimpressive to him. See the Pen Flexbox: order by rachelandrew (@rachelandrew) on CodePen. Grid layout takes this a step further. Where flexbox lets us set the order of items in a single dimension, grid layout gives us the ability to position things in two dimensions: both rows and columns. Defined in the CSS, this positioning can be changed at any breakpoint without needing additional markup. Compare the source order with the display order in this example (requires a CSS grid-supporting browser): See the Pen Grid positioning in two dimensions by rachelandrew (@rachelandrew) on CodePen. Laying out our items in two dimensions using grid layout. As these demos show, a straightforward way to decide if you should use grid layout or flexbox is whether you want to position items in one dimension or two. If two, you want grid layout. A note on accessibility and reordering The issues arising from this powerful ability to change the way items are ordered visually from how they appear in the source have been the subject of much discussion. The current flexbox editor’s draft states “Authors must use order only for visual, not logical, reordering of content. Style sheets that use order to perform logical reordering are non-conforming.” —CSS Flexible Box Layout Module Level 1, Editor’s Draft (3 December 2015) This is to ensure that non-visual user agents (a screen reader, for example) can rely on the document source order as being correct. Take care when reordering that you do so from the basis of a sound document that makes sense in terms of source order. Avoid using visual order to convey meaning. Automatic content placement with rules Having control over the order of items, or placing items on a predefined grid, is nice. However, we can often do that already with one method or another and we have frameworks and tools to help us. Tools such as Susy mean we can even get away from stuffing our markup full of grid classes. However, our new layout methods give us some interesting new possibilities. Something that is useful to be able to do when dealing with content coming out of a CMS or being pulled from some other source, is to define a bunch of rules and then say, “Display this content, using these rules.” As an example of this, I will leave you with a Christmas poem displayed in a document alongside Widget the cat and some of the decorations that are bringing him no Christmas cheer whatsoever. The poem is displayed first in the source as a set of paragraphs. I’ve added a class identifying each of the four paragraphs but they are displayed in the source as one text. Below that are all my images, some landscape and some portrait; I’ve added a class of landscape to the landscape ones. The mobile-first grid is a single column and I use line-based placement to explicitly position my poem paragraphs. The grid layout auto-placement rules then take over and place the images into the empty cells left in the grid. At wider screen widths, I declare a four-track grid, and position my poem around the grid, keeping it in a readable order. I also add rules to my landscape class, stating that these items should span two tracks. Once again the grid layout auto-placement rules position the rest of my images without my needing to position them. You will see that grid layout takes items out of source order to fill gaps in the grid. 
It does this because I have set the property grid-auto-flow to dense. The default is sparse meaning that grid will not attempt this backfilling behaviour. Take a look and play around with the full demo (requires a CSS grid layout-supporting browser): See the Pen Grid auto-flow with rules by rachelandrew (@rachelandrew) on CodePen. The final automatic placement example. My wish for 2016 I really hope that in 2016, we will see CSS grid layout finally emerge from behind browser flags, so that we can start to use these features in production — that we can start to move away from using the wrong tools for the job. However, I also hope that we’ll see developers fully embracing these tools as the new system that they are. I want to see people exploring the possibilities they give us, rather than trying to get them to behave like the grid systems of 2015. As you discover these new modules, treat them as the new paradigm that they are, get creative with them. And, as you find the edges of possibility with them, take that feedback to the CSS Working Group. Help improve the layout systems that will shape the look of the future web. Some further reading I maintain a site of grid layout examples and resources at Grid by Example. The three CSS specifications I’ve discussed can be found as editor’s drafts: CSS grid, flexbox, box alignment. I wrote about the last three years of my interest in CSS grid layout, which gives something of a history of the specification. More examples of box alignment and grid layout. My presentation at Fronteers earlier this year, in which I explain more about these concepts. 2015 Rachel Andrew rachelandrew 2015-12-15T00:00:00+00:00 https://24ways.org/2015/grid-flexbox-box-alignment-our-new-system-for-layout/ code
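If you would like to play with these grid examples today while leaving older browsers on a simpler layout, one hedged option is a small feature test in JavaScript (the cssgrid class name here is made up, and the CSS.supports API is assumed to be available):

// A minimal sketch: add a hook class only when CSS grid layout is supported.
if (window.CSS && CSS.supports && CSS.supports('display', 'grid')) {
  document.documentElement.className += ' cssgrid';
}

Grid rules can then be scoped under .cssgrid in the stylesheet, with a float- or flexbox-based fallback in place for everyone else.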
133 Gravity-Defying Page Corners While working on Stikkit, a “page curl” came to be. Not being as crafty as Veerle, you see. I fired up Photoshop to see what could be. “Another copy is running on the network“ … oopsie. With license issues sorted out and a concept in mind I set out to create something flexible and refined. One background image and code that is sure to be lean. A simple solution for lazy people like me. The curl I’ll be showing isn’t a curl at all. It’s simply a gradient that’s 18 pixels tall. With a fade to the left that’s diagonally aligned and a small fade on the right that keeps the illusion defined. Create a selection with the marquee tool (keeping in mind a reasonable minimum width) and drag a gradient (black to transparent) from top to bottom. Now drag a gradient (the background color of the page to transparent) from the bottom left corner to the top right corner. Finally, drag another gradient from the right edge towards the center, about 20 pixels or so. But the top is flat and can be positioned precisely just under the bottom right edge very nicely. And there it will sit, never ever to be busted by varying sizes of text when adjusted. <div id="page"> <div id="page-contents"> <h2>Gravity-Defying!</h2> <p>Lorem ipsum dolor ...</p> </div> </div> Let’s dive into code and in the markup you’ll see “is that an extra div?” … please don’t kill me? The #page div sets the width and bottom padding whose height is equal to the shadow we’re adding. The #page-contents div will set padding in ems to scale with the text size the user intends. The background color will be added here too but not overlapping the shadow where #page’s padding makes room. A simple technique that you may find amusing is to substitute a PNG for the GIF I was using. For that would be crafty and future-proof, too. The page curl could sit on any background hue. I hope you’ve enjoyed this easy little trick. It’s hardly earth-shattering, and arguably slick. But it could come in handy, you just never know. Happy Holidays! And pleasant dreams of web three point oh. 2006 Dan Cederholm dancederholm 2006-12-24T00:00:00+00:00 https://24ways.org/2006/gravity-defying-page-corners/ design
225 Good Ideas Grow on Paper Great designers have one thing in common: their design process is centred on ideas; ideas that are more often than not developed on paper. Though it’s often tempting to take the path of least resistance, turning to the computer in the headlong rush to complete a project (often in the face of formidable client pressure), resist the urge and – for a truly great idea – start first on paper. The path of least resistance is often characterised by cliché and overused techniques – one per cent noise, border-radius, text-shadow – the usual suspects – techniques that are ten-a-penny at the gallery sites. Whilst all are useful, and technique and craft are important, great design isn’t about technique alone – it’s about technique in the service of good ideas. But how do we generate those ideas? Inspiration can certainly come to you out of the blue. When working as a designer in a role which often consists of incubating good ideas, however, idly waiting for the time-honoured lightbulb to appear above your head just isn’t good enough. We need to establish an environment where we tip the odds of getting good ideas in our favour. So, when faced with the blank canvas, what do we do to unlock the proverbial tidal wave of creativity? Fear not. We’re about to share with you a couple of stalwart techniques that will stand you in good stead when you need that good idea, in the face of the pressure of yet another looming deadline. Get the process right Where do ideas come from? In many cases they come from anywhere but the screen. Hence, our first commandment is to close the lid of your computer and, for a change, work on paper. It might seem strange, it might also seem like a distraction, but – trust us – the time invested here will more than pay off. Idea generation should be a process of rapid iteration, sketching and thinking aloud, all processes best undertaken in more fast paced, analogue media. Our tool of choice is the Sharpie and Flip Chart Combo©, intentionally low resolution to encourage lo-fi idea generation. In short, your tools should be designed not to be precious, but to quickly process your thoughts. Ideas can be expressed with a thick line marker or by drawing with a stick in the sand; it’s the ideas that matter, not the medium. Input > Synthesise > Output Ideas don’t materialise in a vacuum. Without constant input, the outputs will inevitably remain the same. As such, it’s essential to maintain an inquisitive mind, ensuring a steady flow of new triggers and stimuli that enable your thinking to evolve. What every designer brings to the table is their prior experience and unique knowledge. It should come as no surprise to discover that a tried and tested method of increasing that knowledge is, believe it or not, to read – often and widely. The best and most nuanced ideas come after many years of priming the brain with an array of diverse material, a point made recently in Jessica Hische’s aptly named Why You Should Know Your Shit. One of the best ways of synthesising the knowledge you accumulate is to write. The act of writing facilitates your thinking and stores the pieces of the jigsaw you’ll one day return to. You don’t have to write a book or a well-articulated article; a scribbled note in the margin will suffice in facilitating the process of digestion. As with writing, we implore you to make sketching an essential part of your digestion process. 
More immediate than writing, sketching has the power to put yet unformed ideas down on paper, giving you an insight into the fantastic conceptions you’re more often than not still incubating. Our second commandment is a practical one: always carry a sketchbook and a pen. Although it seems that the very best ideas are scribbled on the back of a beer mat or a wine-stained napkin, always carrying your ‘thinking utensils’ should be as natural as not leaving the house without your phone, wallet, keys or pants. Further, the more you use your sketchbook, the less precious you’ll find yourself becoming. Sketching isn’t about being an excellent draughtsman, it’s about synthesising and processing your thoughts and ideas, as Jason Santa Maria summarises nicely in his article Pretty Sketchy: Sketchbooks are not about being a good artist, they’re about being a good thinker. Jason Santa Maria The sketchbook and pen should become your trusted tools in your task to constantly survey the world around you. As Paul Smith says, You Can Find Inspiration in Anything; close the lid, look beyond the computer; there’s a whole world of inspiration out there. Learn to love old dusty buildings So, how do you learn? How do you push beyond the predictable world pre-filtered by Mr Google? The answer lies in establishing a habit of exploring the wonderful worlds of museums and libraries, dusty old buildings that repay repeated visits. Once the primary repositories of thought and endless sources of inspiration, these institutions are now often passed over for the quick fix of a Google search or Wikipedia by you, the designer, chained to a desk and manacled to a MacBook. Whilst others might frown, we urge you to get away from your desk and take an eye-opening stroll through the knowledge-filled corridors of yore (and don’t forget to bring your sketchbook). Here you’ll find ideas aplenty, ideas that will set you apart from your peers, who remain ever-reliant on the same old digital sources. The idea generation toolbox Now that we’ve established the importance of getting the process and the context right, it’s time to meet the idea generation toolbox: a series of tools and techniques that can be applied singularly or in combination to solve the perennial problem of the blank canvas. The clean sheet of paper, numbing in its emptiness, can prove an insurmountable barrier to many a project, but the route beyond it involves just a few, well-considered steps. The route to a good idea lies in widening your pool of inspiration at the project outset. Let go and generate ideas quickly; it’s critical to diverge before you converge – but how do we do this and what exactly do we mean by this? The temptation is to pull something out of your well-worn box of tricks, something that you know from experience will do the job. We urge you, however, not to fall prey to this desire. You can do better; better still, a few of you putting your minds together can do a lot better. By avoiding the path of least resistance, you can create something extraordinary. Culturally, we value logical, linear thinking. Since the days of Plato and Aristotle, critical thinking, deduction and the pursuit of truth have been rewarded. To generate creative ideas, however, we need to start thinking sideways, making connections that don’t necessarily follow logically. 
Lateral thinking, a phrase coined by Edward de Bono in 1967, aptly describes this very process: With logic you start out with certain ingredients, just as in playing chess you start out with given pieces – lateral thinking is concerned not with playing with the existing pieces but with seeking to change those very pieces. Edward de Bono One of the easiest ways to start thinking laterally is to start with a mind map, a perfect tool for widening the scope of a project beyond the predictable and an ideal one for getting the context right for discovery. Making connections Mind maps can be used to generate, visualise and structure ideas. Arranged intuitively and classified around groupings, mind maps allow chance connections to be drawn across related groups of information, and are perfect for exposing alogical associations and unexpected relationships. Get a number of people together in a room, equipped with the Sharpie and Flip Chart Combo©. Give yourself a limited amount of time – half an hour should prove more than enough – and you’ll be surprised at the results a few well-chosen people can generate in a very short space of time. The key is to work fast, diverge and not inhibit thinking. We’ve been embracing Tony Buzan’s methods in our teaching for over a decade. His ideas on the power of radiant thinking and how this can be applied to mind maps, uncover the real power which lies in the human brain’s ability to spot connections across a mapped out body of diverse knowledge. Frank Chimero wrote about this recently in How to Have an Idea, which beautifully illustrates Mr Buzan’s theories, articulating the importance of the brain’s ability to make abstract connections, finding unexpected pairings when a concept is mapped out on paper. Once a topic is surveyed and a rich set of stimuli articulated, the next stage is to draw connections, pulling from opposite sides of the mind map. It’s at this point, when defining alogical connections, that the truly interesting and unexpected ideas are often uncovered. The curve ball If you’ve followed our instructions so far, all being well, you should have a number of ideas. Good news: we have one last technique to throw into the mix. We like to call it ‘the curve ball’, that last minute ‘something’ that forces you to rethink and encourages you to address a problem from a different direction. There are a number of ways of throwing in a curve ball – a short, sharp, unexpected impetus – but we have a firm favourite we think you’ll appreciate. Brian Eno and Peter Schmidt’s Oblique Strategies – subtitled ‘Over One Hundred Worthwhile Dilemmas’ – are the perfect creative tool for throwing in a spot of unpredictability. As Eno and Schmidt put it: The Oblique Strategies can be used as a pack (a set of possibilities being continuously reviewed in the mind) or by drawing a single card from the shuffled pack when a dilemma occurs in a working situation. In this case the card is trusted even if its appropriateness is quite unclear. They are not final, as new ideas will present themselves, and others will become self-evident. Brian Eno and Peter Schmidt Simply pick a card and apply the strategy to the problem at hand. The key here, as with de Bono’s techniques, is to embrace randomness and provocation to inspire lateral creative approaches. To assist this process, you might wish to consult one of the many virtual decks of Oblique Strategies online. 
Wrapping up To summarise, it’s tempting to see the route to the fastest satisfactory conclusion in a computer when, in reality, that’s the last place you should start. The tools we’ve introduced, far from time-consuming, are hyper-efficient, always at hand and, if you factor them into your workflow, the key to unlocking the ideas that set the great designers apart. We wish you well on your quest in search of the perfect idea, now armed with the knowledge that the quest begins on paper. 2010 The Standardistas thestandardistas 2010-12-13T00:00:00+00:00 https://24ways.org/2010/good-ideas-grow-on-paper/ process
222 Golden Spirals As building blocks go, the rectangle is not one to overwhelm the designer with decisions. On the face of it, you have two options: you can set the width and the height. But despite this apparent simplicity, there are combinations of width and height that can look unbalanced. If a rectangle is too tall and slim, it might appear precarious. If it is not tall enough, it may simply look flat. But like a guitar string that’s out of tune, you can tweak the proportions little by little until a rectangle feels, as Goldilocks said, just right. A golden rectangle has its height and width in the golden ratio, which is approximately 1:1.618. These proportions have long been recognised as being aesthetically harmonious. Whether through instruction or by intuition, artists have understood how to exploit these proportions over the centuries. Examples can be found in classical architecture, medieval book construction, and even in the recent #newtwitter redesign. A mathematical curiosity The golden rectangle is unique, in that if you remove a square section from it, what is left behind is itself a golden rectangle. The removal of a square can be repeated on the rectangle that is left behind, and then repeated again, as many times as you like. This means that the golden rectangle can be treated as a building block for recursive patterns. In this article, we will exploit this property to build a golden spiral, using only HTML and CSS. The markup The HTML we’ll use for this study is simply a series of nested <div>s. <body> <div id="container"> <div class="cycle"> <div> <div> <div> <div class="cycle"> <div> <div> <div> <div class="cycle"> <div> <div> <div> <div class="cycle"></div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </body> The first of these has the class cycle, and so does every fourth descendant thereafter. The spiral completes a cycle every four steps, so this class allows styles to be reused on <div>s that appear at the same position in each cycle. Golden proportions To create our spiral we are going to exploit the unique properties of the golden rectangle, so our first priority is to ensure that we have a golden rectangle to begin with. If we pick a length for the short edge – say, 288 pixels – we can then calculate the length of the long edge by multiplying this value by 1.618. In this case, 288 × 1.618 ≈ 466, so our starting point will be a <div> with these properties: #container > div { width: 466px; height: 288px; } The greater than symbol is used here to single out the immediate child of the #container element, without affecting the grandchild or any of the more distant descendants. We could go on to specify the precise pixel dimensions of every child element, but that means doing a lot of sums. It would be much easier if we just specified the dimensions for each element as a percentage of the width and height of its parent. This also has the advantage that if you change the size of the outermost container, all nested elements would be resized automatically – something that we shall exploit later. The approximate value of 38.2% can be derived from 100 × (phi − 1) ÷ phi, where the Greek letter phi (ϕ) stands for the golden ratio. The value of phi can be expressed as phi = (1 + √5) ÷ 2, which is approximately 1.618. You don’t have to understand the derivation to use it. Just remember that if you start with a golden rectangle, you can slice 38.2% from it to create a new golden rectangle. 
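A quick check of the arithmetic makes this concrete. With phi ≈ 1.618, 100 × (1.618 − 1) ÷ 1.618 ≈ 38.2. And slicing a 288 × 288 pixel square from our 466 × 288 pixel rectangle leaves a 178 × 288 pixel rectangle: 178 ÷ 466 ≈ 0.382, and 288 ÷ 178 ≈ 1.618, so the remainder is itself a golden rectangle.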
This can be expressed in CSS quite simply: .cycle, .cycle > div > div { height: 38.2%; width: 100%; } .cycle > div, .cycle > div > div > div { width: 38.2%; height: 100%; } You can see the result so far by visiting Demo One. With no borders or shading, there is nothing to see yet, so let’s address that next. Shading with transparency We’ll need to apply some shading to distinguish each segment of the spiral from its neighbours. We could start with a white background, then progress through shades of grey: #eee, #ddd, #ccc and so on, but this means hard-coding the background-color for every element. A more elegant solution would be to use the same colour for every element, but to make each one slightly transparent. The nested <div>s that we are working with could be compared to layers in Photoshop. By applying a semi-transparent shade of grey, each successive layer can build on top of the darker layers beneath it. The effect accumulates, causing each successive layer to appear slightly darker than the last. In his 2009 article for 24 ways, Drew McLellan showed how to create a semi-transparent effect by working with RGBA colour. Here, we’ll use the colour black with an alpha value of 0.07. #container div { background-color: rgba(0,0,0,0.07) } Note that I haven’t used the immediate child selector here, which means that this rule will apply to all <div> elements inside the #container, no matter how deeply nested they are. You can view the result in Demo Two. As you can see, the golden rectangles alternate between landscape and portrait orientation. To turn these nested rectangles into a spiral, we can round one corner of each segment (Demo Three). The CSS3 specification indicates that a percentage can be used to set the border-radius property, but using percentages does not achieve consistent results in browsers today. Luckily, if you specify a border-radius in pixels using a value that is greater than the width and height of the element, then the resulting curve will use the shorter length side as its radius. This produces exactly the effect that we want, so we’ll use an arbitrarily high value of 10,000 pixels for each border-radius: .cycle { border-radius: 0px; border-bottom-left-radius: 10000px; } .cycle > div { border-radius: 0px; border-bottom-right-radius: 10000px; } .cycle > div > div { border-radius: 0px; border-top-right-radius: 10000px; } .cycle > div > div > div { border-radius: 0px; border-top-left-radius: 10000px; } Note that the specification for the border-radius property is still in flux, so it is advisable to use vendor-specific prefixes. I have omitted them from the example above for the sake of clarity, but if you view source on Demo Four then you’ll see that the actual styles are not quite as brief. Filling the available space We have created an approximation of the Golden Spiral using only HTML and CSS. Neat! It’s a shame that it occupies just a fraction of the available space. As a finishing touch, let’s make the golden spiral expand or contract to use the full space available to it. Ideally, the outermost container should use the full available width or height that could accommodate a rectangle of golden proportions. This behaviour is available for background images using the background-size: contain property, but I know of no way to make block-level HTML elements behave in this fashion (if I’m missing something, please enlighten me). Where CSS fails to deliver, JavaScript can usually provide a workaround. 
This snippet requires jQuery: $(document).ready(function() { var phi = (1 + Math.sqrt(5))/2; $(window).resize(function() { var windowWidth = $(this).width(), windowHeight = $(this).height(), goldenWidth = windowWidth, goldenHeight = windowHeight; if (windowWidth/windowHeight > phi) { // panoramic viewport – use full height goldenWidth = windowHeight * phi; } else { // portrait viewport – use full width goldenHeight = windowWidth / phi; } $("#container > div.cycle") .width(goldenWidth) .height(goldenHeight); }).resize(); }); You can view the result by visiting Demo Five. Is it just me, or can you see an elephant in there? You can probably think of many ways to enhance this further, but for this study we’ll leave it there. It has been a good excuse to play with proportions, positioning and the immediate child selector, as well as new CSS3 features such as border-radius and RGBA colours. If you are not already designing with golden proportions, then perhaps this will inspire you to begin. 2010 Drew Neil drewneil 2010-12-07T00:00:00+00:00 https://24ways.org/2010/golden-spirals/ design
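For anyone following along without jQuery, the same sizing logic can be written in plain JavaScript. This is a minimal sketch rather than part of the original demos: it assumes the same #container > div.cycle markup, the sizeSpiral name is invented for illustration, and window.innerWidth is only roughly equivalent to jQuery’s $(window).width().

function sizeSpiral() {
  var phi = (1 + Math.sqrt(5)) / 2;
  var windowWidth = window.innerWidth;
  var windowHeight = window.innerHeight;
  var goldenWidth = windowWidth;
  var goldenHeight = windowHeight;
  if (windowWidth / windowHeight > phi) {
    // panoramic viewport: use the full height
    goldenWidth = windowHeight * phi;
  } else {
    // portrait viewport: use the full width
    goldenHeight = windowWidth / phi;
  }
  var cycle = document.querySelector("#container > div.cycle");
  cycle.style.width = goldenWidth + "px";
  cycle.style.height = goldenHeight + "px";
}
window.addEventListener("resize", sizeSpiral);
sizeSpiral();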
180 Going Nuts with CSS Transitions I’m going to show you how CSS 3 transforms and WebKit transitions can add zing to the way you present images on your site. Laying the foundations First we are going to make our images look like mini polaroids with captions. Here’s the markup: <div class="polaroid pull-right"> <img src="../img/seal.jpg" alt=""> <p class="caption">Found this little cutie on a walk in New Zealand!</p> </div> You’ll notice we’re using a somewhat presentational class of pull-right here. This means the logic is kept separate from the code that applies the polaroid effect. The polaroid class has no positioning, which allows it to be used generically anywhere that the effect is required. The pull classes set a float and add appropriate margins—they can be used for things like blockquotes as well. .polaroid { width: 150px; padding: 10px 10px 20px 10px; border: 1px solid #BFBFBF; background-color: white; -webkit-box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4); -moz-box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4); box-shadow: 2px 2px 3px rgba(135, 139, 144, 0.4); } The actual polaroid effect itself is simply applied using padding, a border and a background colour. We also apply a nice subtle box shadow, using a property that is supported by modern WebKit browsers and Firefox 3.5+. We include the box-shadow property last to ensure that future browsers that support the eventual CSS3 specified version natively will use that implementation over the legacy browser-specific version. The box-shadow property takes four values: three lengths and a colour. The first is the horizontal offset of the shadow—positive values place the shadow on the right, while negative values place it to the left. The second is the vertical offset, positive meaning below. If both of these are set to 0, the shadow is positioned equally on all four sides. The last length value sets the blur radius—the larger the number, the blurrier the shadow (therefore the darker you need to make the colour to have an effect). The colour value can be given in any format recognised by CSS. Here, we’re using rgba as explained by Drew behind the first door of this year’s calendar. Rotation For browsers that understand it (currently our old favourites WebKit and FF3.5+) we can add some visual flair by rotating the image, using the transform CSS 3 property. -webkit-transform: rotate(9deg); -moz-transform: rotate(9deg); transform: rotate(9deg); Rotations can be specified in degrees, radians (rads) or grads. WebKit also supports turns; unfortunately, Firefox doesn’t just yet. For our example, we want any polaroid images on the left hand side to be rotated in the opposite direction, using a negative degree value: .pull-left.polaroid { -webkit-transform: rotate(-9deg); -moz-transform: rotate(-9deg); transform: rotate(-9deg); } Multiple class selectors don’t work in IE6, but as luck would have it, the transform property doesn’t work in any current IE version either. The above code is a good example of progressive enrichment: browsers that don’t support box-shadow or transform will still see the image and basic polaroid effect. Animation WebKit is unique amongst browser rendering engines in that it allows animation to be specified in pure CSS. Although this may never actually make it in to the CSS 3 specification, it degrades nicely and more importantly is an awful lot of fun! Let’s go nuts. In the next demo, the image is contained within a link and mousing over that link causes the polaroid to animate from being angled to being straight. 
Here’s our new markup: <a href="http://www.flickr.com/photos/nataliedowne/2340993237/" class="polaroid"> <img src="../img/raft.jpg" alt=""> White water rafting in Queenstown </a> And here are the relevant lines of CSS: a.polaroid { /* ... */ -webkit-transform: rotate(10deg); -webkit-transition: -webkit-transform 0.5s ease-in; } a.polaroid:hover, a.polaroid:focus, a.polaroid:active { /* ... */ -webkit-transform: rotate(0deg); } The -webkit-transition property is the magic wand that sets up the animation. It takes three values: the property to be animated, the duration of the animation and a ‘timing function’ (which affects the animation’s acceleration, for a smoother effect). -webkit-transition only takes effect when the specified property changes. In pure CSS, this is done using dynamic pseudo-classes. You can also change the properties using JavaScript, but that’s a story for another time. Throwing polaroids at a table Imagine there are lots of differently sized polaroid photos scattered on a table. That’s the effect we are aiming for with our next demo. As an aside: we are using absolute positioning to arrange the images inside a flexible width container (with a minimum and maximum width specified in pixels). As some are positioned from the left and some from the right, when you resize the browser they shuffle underneath each other. This is an effect used on the UX London site. This demo uses a darker colour shadow with more transparency than before. The grey shadow in the previous example worked fine, but it was against a solid background. Since the images are now overlapping each other, the more opaque shadow looked fake. -webkit-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3); -moz-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3); box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3); On hover, as well as our previous trick of animating the image rotation back to straight, we are also making the shadow darker and setting the z-index to be higher than the other images so that it appears on top. And Finally… Finally, for a bit more fun, we’re going to simulate the images coming towards you and lifting off the page. We’ll achieve this by making them grow larger and by offsetting the shadow and making it longer. Screenshot 1 shows the default state, while 2 shows our previous hover effect. Screenshot 3 is the effect we are aiming for, illustrated by demo 4. a.polaroid { /* ... */ z-index: 2; -webkit-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3); -moz-box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3); box-shadow: 2px 2px 4px rgba(0,0, 0, 0.3); -webkit-transform: rotate(10deg); -moz-transform: rotate(10deg); transform: rotate(10deg); -webkit-transition: all 0.5s ease-in; } a.polaroid:hover, a.polaroid:focus, a.polaroid:active { z-index: 999; border-color: #6A6A6A; -webkit-box-shadow: 15px 15px 20px rgba(0,0, 0, 0.4); -moz-box-shadow: 15px 15px 20px rgba(0,0, 0, 0.4); box-shadow: 15px 15px 20px rgba(0,0, 0, 0.4); -webkit-transform: rotate(0deg) scale(1.05); -moz-transform: rotate(0deg) scale(1.05); transform: rotate(0deg) scale(1.05); } You’ll notice we are now giving the transform property another transform function: scale, which increases the size by the specified factor. Other things you can do with transform include skewing and translating, or you can go mad creating your own transforms with a matrix. The box-shadow has both its offset and blur radius increased dramatically, and is darkened using the alpha channel of the rgba colour. 
And because we want the effects to all animate smoothly, we pass a value of all to the -webkit-transition property, ensuring that any changed property on that link will be animated. Demo 5 is the finished example, bringing everything nicely together. CSS transitions and transforms are a great example of progressive enrichment, which means improving the experience for a portion of the audience without negatively affecting other users. They are also a lot of fun to play with! Further reading -moz-transform – the Mozilla Developer Center has a comprehensive explanation of transform that also applies to -webkit-transform and transform. CSS: Animation Using CSS Transforms – this is a good, more in-depth tutorial on animations. CSS Animation – the Safari blog explains the usage of -webkit-transform. Dinky pocketbooks with transform – another use for transforms, create your own printable pocketbook. A while back, Simon wrote a little bookmarklet to spin the entire page… warning: this will spin the entire page. 2009 Natalie Downe nataliedowne 2009-12-14T00:00:00+00:00 https://24ways.org/2009/going-nuts-with-css-transitions/ code
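A loose end from the polaroid demos above: the article notes that transitions can also be triggered from JavaScript rather than from dynamic pseudo-classes. As a rough sketch of that idea, assuming an extra rule such as a.polaroid.flat { -webkit-transform: rotate(0deg); } in the stylesheet (the flat class name is invented here, and classList assumes a reasonably modern browser):

// straighten or re-tilt a polaroid on click;
// the existing -webkit-transition rule animates the change
var polaroid = document.querySelector("a.polaroid");
polaroid.addEventListener("click", function (event) {
  event.preventDefault();
  polaroid.classList.toggle("flat");
});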
278 Going Both Ways It’s that time of the year again: Santa is getting ready to travel the world. Up until now, girls and boys from all over have sent in letters asking for what they want. I hope that Santa and his elves have—unlike me—learned more than just English. On the Internet, those girls and boys want to participate in sharing their stories and videos of opening presents and of being with friends and family. Ah, yes, the wonders of user generated content. But more than that, people also want to be able to use sites in the language they know. While you and I might expect the text to read from left to right, not all languages do. Some go from right to left, such as Arabic and Hebrew. (Some also go from top to bottom, but for now, let’s just worry about those first two directions!) If we were building a site for girls and boys to send their letters to Santa, we need to consider having the interface in the language and direction that they prefer. On the elves’ side, they may be viewing the site in one direction but reading the user generated content in the other direction. We need to build a site that supports bidirectional (or bidi) text. Let’s take a look at some things to be aware of when it comes to building bidi interfaces. Setting the direction of the interface Right off the bat, we need to tell the browser what direction the text should be going in. To do this, we add the dir attribute to an HTML element and set it to either LTR (for left to right) or RTL (for right to left). <body dir="rtl"> You can add the dir attribute to any element and it will set or change the direction for the content within that element. <body dir="ltr"> Here is English Content. <div dir="rtl">الموضوع</div> </body> You can also set the direction via CSS. .rtl { direction: rtl; } It’s generally recommended that you don’t use CSS to set the direction of the text. Text direction is an important part of the content that should be retained even in environments where the CSS may not be available or fails to load. How things change with the direction attribute Just adding the dir attribute tells the browser to render the content within it differently. The text aligns to the right of the page and, interestingly, punctuation appears at the left of the sentence. (We’ll get to that in a little bit.) Scrollbars in most browsers will appear on the left instead of the right. WebKit is the notable exception here: it always shows the scrollbar on the right, no matter what the text direction is. Avoid a design that expects the scrollbar to be in a specific place (and to be a specific size). Changing the order of text mid-way As we saw in that previous example, the punctuation appeared at the beginning of the sentence instead of the end, even though the text was English. At Yahoo!, we have an interesting dilemma where the company name has punctuation in it. Therefore, when the name appears in the middle of (for example) Arabic text, the exclamation mark appears at the beginning of the word instead of the end. There are two ways in which this problem can be solved: 1. Use HTML around the left-to-right content. To solve the problem of the Yahoo! name in the midst of Arabic text, we can wrap a span around it and change the direction on that element. 2. Use a text direction mark in the content. Unicode has two marks, U+200E and U+200F, that tell the browser that the text is in a particular direction. Placing the appropriate mark right after the punctuation will correct the placement. 
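To sketch the first approach in markup (the Arabic here is arbitrary filler text for illustration):

<p dir="rtl">
  <!-- the nested span restores left-to-right rendering for the brand name,
       keeping the exclamation mark at the end of the word -->
  النص العربي <span dir="ltr">Yahoo!</span> النص العربي
</p>

The second approach places an invisible directional mark in the text itself.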
Using the HTML entity: Yahoo!&lrm; Tables Thankfully, the cells of a data table also get reordered from right to left. Equally nice, if you’re using display:table, the content will still get reordered. CSS So far, we’ve seen that the dir attribute does a pretty decent job of getting content flowing in the direction that we need it. Unfortunately, there are huge swaths of design handled by CSS over which the handy dir attribute has zero effect. Many properties, like float or absolute positioning with left and right values, are unaffected and must be handled manually. Elements that were floated left must now be floated right. Left margins and paddings must now move to the right and the right margins and paddings must now move to the left. Since the browser won’t handle this for us, we have a couple of approaches that we can use: CSS Only We can take advantage of the attribute selector to target CSS to apply in one direction or another. [dir=ltr] .module { float: left; margin: 0 0 0 20px; } [dir=rtl] .module { float: right; margin: 0 20px 0 0; } As you can see from this example, both of the properties have been modified for the flipped interface. If your interface is rather complicated, you will have to create a lot of duplicate rules to have the site looking good in both directions while serving up a single stylesheet. CSSJanus Google has a tool called CSSJanus. It’s a Python script that runs over the LTR versions of your CSS files and generates RTL versions. For the RTL version of the site, just serve up those CSS files instead of the LTR versions. The script looks for keywords and value combinations and automatically swaps them so you don’t have to. At Yahoo!, CSSJanus was a huge help in speeding up our development of a bidi interface. We’ve also made a number of improvements to the script to better handle border radius, background positioning, and gradients. We will be pushing those changes back into the CSSJanus project. Background Images Background images, especially for things like CSS sprites, also raise an interesting dilemma. Background images are positioned relative to the left of the element. In a flipped interface, however, we need to position them relative to the right. An icon that would be to the left of some text will now need to appear on the right. If the x position of the background is percentage-based, then it’s fairly easy to swap the values. 0 becomes 100%, 10% becomes 90% and so on. If the x position is pixel-based, then we’re in a bit of a pickle. There’s no way to say that the image should be a certain number of pixels from the right. Therefore, you’ll need to ensure that any background image that needs to be swapped should be percentage-based. (99.9% of the time, the background position will need to be 0 so that it can be changed to 100% for RTL.) If you’re taking an existing implementation, background positioning will likely be the biggest hurdle you’ll have to overcome in swapping your interface around. If you make sure your x position is always percentage-based from the beginning, you’ll have a much smoother process ahead of you! Flipping Images This is a more subtle point, and one where you’ll really want an expert familiar with the region to weigh in. In RTL interfaces, users may expect certain icons to also be flipped. Pencil icons that skew to the right in LTR interfaces might need to be swapped to skew to the left, instead. Chat bubbles that come from the left will need to come from the right. The easiest way to handle this is to create new images. 
Name the LTR versions with -ltr in the name and name the RTL versions with -rtl in the name. CSSJanus will automatically rename all file references from -ltr to -rtl. The Future Thankfully, those within the W3C recognize that CSS should be more agnostic. As a result, they’ve begun introducing new properties that allow the browser to manage the swapping from left to right for us. The CSS3 specification for backgrounds allows the background-position to be relative to corners other than the top left by specifying keywords before each position. This will position the background 5px from the bottom right of the element. background-position: right 5px bottom 5px; Opera 11.60 is currently the only browser that supports this syntax. For margin and padding, we have margin-start and margin-end. In LTR interfaces, margin-start would be the same as margin-left and in RTL interfaces, margin-start would be the same as margin-right. Firefox and WebKit support these but with vendor prefixes right now: -webkit-margin-start: 20px; -moz-margin-start: 20px; In the CSS3 Images working draft specification, there’s an image() function that allows us to specify image fallbacks and whether those fallbacks are for LTR or RTL interfaces. background: image('sprite.png' ltr, 'sprite-rtl.png' rtl); Unfortunately, no browser supports this yet, but it’s nice to be able to dream of how much easier this will be in the future! Ho Ho Ho Hopefully, after all of this, you’re full of cheer knowing that you’re well on your way to creating interfaces that can go both ways! 2011 Jonathan Snook jonathansnook 2011-12-19T00:00:00+00:00 https://24ways.org/2011/going-both-ways/ ux
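A footnote on the background positioning advice above: written out, the percentage-based swap amounts to a pair of rules like this sketch, where the .icon class and sprite.png file are invented for illustration:

/* LTR: the icon is measured from the left edge of the element */
[dir=ltr] .icon { background: url(sprite.png) no-repeat 0 50%; }

/* RTL: the same image, now measured from the right edge */
[dir=rtl] .icon { background: url(sprite.png) no-repeat 100% 50%; }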
224 Go Forth and Make Awesomeness We’ve all dreamed of being a superhero: maybe that’s why we’ve ended up on the web—a place where we can do good deeds and celebrate them on a daily basis. Wear your dreams At age four, I wore my Wonder Woman Underoos around my house, my grandparents’ house, our neighbor’s house, and even around the yard. I wanted to be a superhero when I grew up. I was crushed to learn that there is no school for superheroes—no place to earn a degree in how to save the world from looming evil. Instead, I—like everyone else—was destined to go to ordinary school to focus on ABCs and 123s. Even still, I want to save the world. Intend your goodness Random acts of kindness make a difference. Books, films, and advertising campaigns tout random acts of kindness and the positive influence they can have on the world. But why do acts of kindness have to be so random? Why can’t we intend to be kind? A true superhero wakes each morning intending to perform selfless acts for the community. Why can’t we do the same thing? As a child, my mother taught me to plan to do at least three good deeds each day. And even now, years later, I put on my invisible cape looking for ways to do good. Here are some examples: slowing down to allow another driver in before me from the highway on-ramp bringing a co-worker their favorite kind of coffee or tea sharing my umbrella on a rainy day holding a door open for someone with full hands listening intently when someone shares a story complimenting someone on a job well done thanking someone for a job well done leaving a constructive, or even supportive comment on someone’s blog As you can see, these acts are simple. Doing good and being kind is partially about being aware—aware of the words we speak and the actions we take. Like superheroes, we create our own code of conduct to live by. Hopefully, we choose to put the community before ourselves (within reason) and to do our best not to damage it as we move through our lives. Take a bite out of the Apple With some thought, we can weave this type of thinking and action into our business choices. We can take the simple acts of kindness concept and amplify it a bit. With this amplification, we can be a new kind of superhero. In 1997, during a presentation, Steve Jobs stated Apple’s core value in a simple, yet powerful, sentence: We believe that people with passion can change the world for the better. Apple fan or not, those are powerful words. Define your core Every organization must define its core values. Core values help us to frame, recognize, and understand the principles our organization embodies and practices. It doesn’t matter if you’re starting a new organization or you want to define values within an existing organization. Even if you’re a freelancer, defining core values will help guide your decisions and actions. If you can, work as a team to define core values. Gather the people who are your support system—your business partners, your colleagues, and maybe even a trusted client—this is now your core value creation team. Have a brainstorming session with your team. Let ideas flow. Give equal weight to the things people say. You may not hear everything you thought you might hear—that’s OK. You want the session to be free-flowing and honest. Ask yourself and your team questions like: What do you think my/our/your core values are? What do you think my/our/your priorities are? What do you think my/our/your core values should be? What do you think my/our/your priorities should be? 
How do you think I/we should treat customers, clients, and each other? How do we want others to treat us? What are my/our/your success stories? What has defined these experiences as successful? From this brainstorming session, you will craft your superhero code of conduct. You will decide what you will and will not do. You will determine how you will and will not act. You’re setting the standards that you will live and work by—so don’t take this exercise lightly. Take your time. Use the exercise as a way to open a discussion about values. Find out what you and your team believe in. Set these values and keep them in place. Write them down and share these with your team and with the world. By sharing your core values, you hold yourself more accountable to them. You also send a strong message to the rest of the world about what type of organization you are and what you believe in. Other organizations and people may decide to align or not to align themselves with you because of your core values. This is good. Chances are, you’ll be happier and more profitable if you work with other organizations and people who share similar core values. Photo: Laura Winn During your brainstorming session, list keywords. Don’t edit. Allow things to take their course. Some examples of keywords might be: Ability · Achievement · Adventure · Ambition · Altruism · Awareness · Balance · Caring · Charity · Citizenship · Collaboration · Commitment · Community · Compassion · Consideration · Cooperation · Courage · Courtesy · Creativity · Democracy · Dignity · Diplomacy · Discipline · Diversity · Education · Efficiency · Energy · Equality · Excellence · Excitement · Fairness · Family · Freedom · Fun · Goodness · Gratefulness · Growth · Happiness · Harmony · Helping · Honor · Hope · Humility · Humor · Imagination · Individuality · Innovation · Integrity · Intelligence · Joy · Justice · Kindness · Knowledge · Leadership · Learning · Loyalty · Meaning · Mindfulness · Moderation · Modesty · Nurture · Openness · Organization · Passion · Patience · Peace · Planning · Principles · Productivity · Purpose · Quality · Reliability · Respectfulness · Responsibility · Security · Sensitivity · Service · Sharing · Simplicity · Stability · Tolerance · Transparency · Trust · Truthfulness · Understanding · Unity · Variety · Vision · Wisdom After you have a list of keywords, create your core values statement using the themes from your brainstorming session. There are no rules: while above, Steve Jobs summed up Apple’s core values in one sentence, Zappos has ten core values: Deliver WOW Through Service Embrace and Drive Change Create Fun and A Little Weirdness Be Adventurous, Creative, and Open-Minded Pursue Growth and Learning Build Open and Honest Relationships With Communication Build a Positive Team and Family Spirit Do More With Less Be Passionate and Determined Be Humble To see how Zappos’ employees embrace these core values, watch the video they created and posted on their website. Dog food is yummy Although I find merit in every keyword listed, I’ve distilled my core values to their simplest form: Make awesomeness. Do good. How do you make awesomeness and do good? You need ambition, balance, collaboration, commitment, fun, and you need every keyword listed to support these actions. Again, there are no rules: your core values can be one sentence or a bulleted list. What matters is being true to yourself and creating core values that others can understand. 
Before I start any project I ask myself: is there a way to make awesomeness and to do good? If the answer is “yes,” I embrace the endeavor because it aligns with my core values. If the answer is “no,” I move on to a project that supports my core values. Unleash your powers Although every organization will craft different core values, I imagine that you want to be a superhero and that you will define “doing good” (or something similar) as one of your core values. Whether you work by yourself or with a team, you can use the web as a tool to help do good. It can be as simple as giving a free hug, or something a little more complex to help others and help your organization meet the bottom line. Some interesting initiatives that use the web to do good are: Yahoo!: How Good Grows Desigual: Happy Hunters Edge Shave Gel: Anti-irritation campaign Knowing your underlying desire to return to your Underoos-and-cape-sporting childhood and knowing that you don’t always have the opportunity to develop an entire initiative to “do good,” remember that as writers, designers, and developers, we can perform superhero acts on a daily basis by making content, design, and development accessible to the greatest number of people. By considering other people’s needs, we are intentionally performing acts of kindness—we’re doing good. There are many ways to write, design, and develop websites—many of which will be discussed in other 24ways.org articles. As we make content, design, and development decisions—as we develop campaigns and initiatives—we need to keep our core values in mind. It’s easy to make a positive difference in the world. Just be the superhero you’ve always wanted to be. Go forth and make awesomeness. If you would like to do good today, support The United Nations Children’s Fund, an organization that works for children’s rights, their survival, development and protection, by purchasing this year’s 24 ways Annual 2010 created by Five Simple Steps. All proceeds go to UNICEF. 2010 Leslie Jensen-Inman lesliejenseninman 2010-12-04T00:00:00+00:00 https://24ways.org/2010/go-forth-and-make-awesomeness/ business
95 Giving Content Priority with CSS3 Grid Layout Browser support for many of the modules that are part of CSS3 has enabled us to use CSS for many of the things we used to have to use images for. The rise of mobile browsers and the concept of responsive web design has given us a whole new way of looking at design for the web. However, when it comes to layout, we haven’t moved very far at all. We have talked for years about separating our content and source order from the presentation of that content, yet most of us have had to make decisions on source order in order to get a certain visual layout. Owing to some interesting specifications making their way through the W3C process at the moment, though, there is hope of change on the horizon. In this article I’m going to look at one CSS module, the CSS3 grid layout module, that enables us to define a grid and place elements on to it. This article comprises a practical demonstration of the basics of grid layout, and also a discussion of one way in which we can start thinking of content in a more adaptive way. Before we get started, it is important to note that, at the time of writing, these examples work only in Internet Explorer 10. CSS3 grid layout is a module created by Microsoft, and implemented using the -ms prefix in IE10. My examples will all use the -ms prefix, and not include other prefixes simply because this is such an early stage specification, and by the time there are implementations in other browsers there may be inconsistencies. The implementation I describe today may well change, but is also there for your feedback. If you don’t have access to IE10, then one way to view and test these examples is by signing up for an account with Browserstack – the free trial would give you time to have a look. I have also included screenshots of all relevant stages in creating the examples. What is CSS3 grid layout? CSS3 grid layout aims to let developers divide up a design into a grid and place content on to that grid. Rather than trying to fabricate a grid from floats, you can declare an actual grid on a container element and then use that to position the elements inside. Most importantly, the source order of those elements does not matter. Declaring a grid We declare a grid using a new value for the display property: display: grid. As we are using the IE10 implementation here, we need to prefix that value: display: -ms-grid;. Once we have declared our grid, we set up the columns and rows using the grid-columns and grid-rows properties. .wrapper { display: -ms-grid; -ms-grid-columns: 200px 20px auto 20px 200px; -ms-grid-rows: auto 1fr; } In the above example, I have declared a grid on the .wrapper element. I have used the grid-columns property to create a grid with a 200 pixel-wide column, a 20 pixel gutter, a flexible width auto column that will stretch to fill the available space, another 20 pixel-wide gutter and a final 200 pixel sidebar: a flexible width layout with two fixed width sidebars. Using the grid-rows property I have created two rows: the first is set to auto so it will stretch to fill whatever I put into it; the second row is set to 1fr, a new value used in grids that means one fraction unit. In this case, one fraction unit of the available space, effectively whatever space is left. Positioning items on the grid Now I have a simple grid, I can pop items on to it. 
If I have a <div> with a class of .content that I want to place into the second row, in the flexible column set to auto, I would use the following CSS: .content { -ms-grid-column: 3; -ms-grid-row: 2; -ms-grid-row-span: 1; } If you are old-school, you may already have realised that we are essentially creating an HTML table-like layout structure using CSS. I found the concept of a table the most helpful way to think about the grid layout module when trying to work out how to place elements. Creating grid systems As soon as I started to play with CSS3 grid layout, I wanted to see if I could use it to replicate a flexible grid system like this fluid 16-column 960 grid system. I started out by defining a grid on my wrapper element, using fractions to make this grid fluid. .wrapper { width: 90%; margin: 0 auto 0 auto; display: -ms-grid; -ms-grid-columns: 1fr (4.25fr 1fr)[16]; -ms-grid-rows: (auto 20px)[24]; } Like the 960 grid system I was using as an example, my grid starts with a gutter, followed by the first actual column, plus another gutter repeated sixteen times. What this means is that if I want to span two columns, as far as the grid layout module is concerned that is actually three columns: two wide columns, plus one gutter. So this needs to be accounted for when positioning items. I created a CSS class for each positioning option: column position; row position; and column span. For example: .grid1 {-ms-grid-column: 2;} /* applying this class positions an item in the first column (the gutter is column 1) */ .grid2 {-ms-grid-column: 4;} /* 2nd column - gutter|column 1|gutter */ .grid3 {-ms-grid-column: 6;} /* 3rd column - gutter|column 1|gutter|column2|gutter */ .row1 {-ms-grid-row:1;} .row2 {-ms-grid-row:3;} .row3 {-ms-grid-row:5;} .colspan1 {-ms-grid-column-span:1;} .colspan2 {-ms-grid-column-span:3;} .colspan3 {-ms-grid-column-span:5;} I could then add multiple classes to each element to set the position on the grid. This then gives me a replica of the fluid grid using CSS3 grid layout. To see this working, fire up IE10 and view Example 1. This works, but… This worked, but isn’t ideal. I considered not showing this stage of my experiment – however, I think it clearly shows how the grid layout module works and is a useful starting point. That said, it’s not an approach I would take in production. First, we have to add classes to our markup that tie an element to a position on the grid. This might not be too much of a problem if we are always going to maintain the sixteen-column grid. As I will show you, though, the real power of the grid layout module appears once you start to redefine the grid, using different grids based on media queries. If you drop to a six-column layout for small screens, positioning items into column 16 makes no sense any more. Calculating grid position using LESS As we’ve seen, if you want to use a grid with main columns and gutters, you have to take into account the spacing between columns as well as the actual columns. This means we have to do some calculating every time we place an item on the grid. In my example above I got around this by creating a CSS class for each position, allowing me to think in sixteen rather than thirty-two columns. But by using a CSS preprocessor, I can avoid using all the classes yet still think in main columns. I’m using LESS for my example. My simple grid framework consists of one simple mixin. 
.position(@column,@row,@colspan,@rowspan) { -ms-grid-column: @column*2; -ms-grid-row: @row*2-1; -ms-grid-column-span: @colspan*2-1; -ms-grid-row-span: @rowspan*2-1; } My mixin takes four parameters: column; row; colspan; and rowspan. So if I wanted to place an item on column four, row three, spanning two columns and one row, I would write the following CSS: .box { .position(4,3,2,1); } The mixin would return: .box { -ms-grid-column: 8; -ms-grid-row: 5; -ms-grid-column-span: 3; -ms-grid-row-span: 1; } This saves me some typing and some maths. I could also add other prefixed values into my mixin as other browsers started to add support. We can see this in action creating a new grid. Instead of adding multiple classes to each element, I can add one class; that class uses the mixin to create the position. I have also played around with row spans using my mixin and you can see we end up with a quite complicated arrangement of boxes. Have a look at example two in IE10. I’ve used the JavaScript LESS parser so that you can view the actual LESS that I use. Note that I have needed to escape the -ms prefixed properties with ~"" to get LESS to accept them. This is looking better. I don’t have direct positioning information on each element in the markup, just a class name – I’ve used grid(x), but it could be something far more semantic. We can now take the example a step further and redefine the grid based on screen width. Media queries and the grid This example uses exactly the same markup as the previous example. However, we are now using media queries to detect screen width and redefine the grid using a different number of columns depending on that width. I start out with a six-column grid, defining that on .wrapper, then setting where the different items sit on this grid: .wrapper { width: 90%; margin: 0 auto 0 auto; display: ~"-ms-grid"; /* escaped for the LESS parser */ -ms-grid-columns: ~"1fr (4.25fr 1fr)[6]"; /* escaped for the LESS parser */ -ms-grid-rows: ~"(auto 20px)[40]"; /* escaped for the LESS parser */ } .grid1 { .position(1,1,1,1); } .grid2 { .position(2,1,1,1); } /* ... see example for all declarations ... */ Using media queries, I redefine the grid to nine columns when we hit a minimum width of 700 pixels. @media only screen and (min-width: 700px) { .wrapper { -ms-grid-columns: ~"1fr (4.25fr 1fr)[9]"; -ms-grid-rows: ~"(auto 20px)[50]"; } .grid1 { .position(1,1,1,1); } .grid2 { .position(2,1,1,1); } /* ... */ } Finally, we redefine the grid for 960 pixels, back to the sixteen-column grid we started out with. @media only screen and (min-width: 940px) { .wrapper { -ms-grid-columns:~" 1fr (4.25fr 1fr)[16]"; -ms-grid-rows:~" (auto 20px)[24]"; } .grid1 { .position(1,1,1,1); } .grid2 { .position(2,1,1,1); } /* ... */ } If you view example three in Internet Explorer 10 you can see how the items reflow to fit the window size. You can also see, looking at the final set of blocks, that source order doesn’t matter. You can pick up a block from anywhere and place it in any position on the grid. Laying out a simple website So far, like a toddler on Christmas Day, we’ve been playing with boxes rather than thinking about what might be in them. So let’s take a quick look at a more realistic layout, in order to see why the CSS3 grid layout module can be really useful. At this time of year, I am very excited to get out of storage my collection of odd nativity sets, prompting my family to suggest I might want to open a museum. Should I ever do so, I’ll need a website, and here is an example layout. 
As I am using CSS3 grid layout, I can order my source in a logical manner. In this example my document is as follows, though these elements could be in any order I please: <div class="wrapper"> <div class="welcome"> ... </div> <article class="main"> ... </article> <div class="info"> ... </div> <div class="ads"> ... </div> </div> For wide viewports I can use grid layout to create a sidebar, with the important information about opening times at the top right and the ads displayed below it. This creates the layout shown in the screenshot above. @media only screen and (min-width: 940px) { .wrapper { -ms-grid-columns:~" 1fr (4.25fr 1fr)[16]"; -ms-grid-rows:~" (auto 20px)[24]"; } .welcome { .position(1,1,12,1); padding: 0 5% 0 0; } .info { .position(13,1,4,1); border: 0; padding:0; } .main { .position(1,2,12,1); padding: 0 5% 0 0; } .ads { .position(13,2,4,1); display: block; margin-left: 0; } } In a floated layout, a sidebar like this often ends up being placed under the main content at smaller screen widths. For my situation this is less than ideal. I want the important information about opening times to end up above the main article, and to push the ads below it. With grid layout I can easily achieve this: at the smallest width, .info ends up in row two and .ads in row five, with the article between. .wrapper { display: ~"-ms-grid"; -ms-grid-columns: ~"1fr (4.25fr 1fr)[4]"; -ms-grid-rows: ~"(auto 20px)[40]"; } .welcome { .position(1,1,4,1); } .info { .position(1,2,4,1); border: 4px solid #fff; padding: 10px; } .content { .position(1,3,4,5); } .main { .position(1,3,4,1); } .ads { .position(1,4,4,1); } Finally, as an extra tweak I add in a breakpoint at 600 pixels and nest a second grid on the ads area, arranging those three images into a row when they sit below the article at a screen width wider than the very narrow mobile width but still too narrow to support a sidebar. @media only screen and (min-width: 600px) { .ads { display: ~"-ms-grid"; -ms-grid-columns: ~"20px 1fr 20px 1fr 20px 1fr"; -ms-grid-rows: ~"1fr"; margin-left: -20px; } .ad:nth-child(1) { .position(1,1,1,1); } .ad:nth-child(2) { .position(2,1,1,1); } .ad:nth-child(3) { .position(3,1,1,1); } } View example four in Internet Explorer 10. This is a very simple example to show how we can use CSS grid layout without needing to add a lot of classes to our document. It also demonstrates how we can manipulate the content depending on the context in which the user is viewing it. Layout, source order and the idea of content priority CSS3 grid layout isn’t the only module that starts to move us away from the issue of visual layout being linked to source order. However, with good support in Internet Explorer 10, it is a nice way to start looking at how this might work. If you look at the grid layout module as something to be used in conjunction with the flexible box layout module and the very interesting CSS regions and exclusions specifications, we have, tantalizingly on the horizon, a powerful set of tools for layout. I am particularly keen on the potential separation of source order from layout as it dovetails rather neatly into something I spend a lot of time thinking about. As a CMS developer, working on larger scale projects as well as our CMS product Perch, I am interested in how we better enable content editors to create content for the web. In particular, I search for better ways to help them create adaptive content; content that will work in a variety of contexts rather than being tied to one representation of that content. 
If the concept of adaptive content is new to you, then Karen McGrane’s presentation Adapting Ourselves to Adaptive Content is the place to start. Karen talks about needing to think of content as chunks that might be used in many different places, displayed differently depending on context. I absolutely agree with Karen’s approach to content. We have always attempted to move content editors away from thinking about creating a page and previewing it on the desktop. However at some point content does need to be published as a page, or a collection of content if you prefer, and bits of that content have priority. Particularly in a small screen context, content gets linearized, we can only show so much at a time, and we need to make sure important content rises to the top. In the case of my example, I wanted to ensure that the address information was clearly visible without scrolling around too much. Dropping it with the entire sidebar to the bottom of the page would not have been so helpful, though neither would moving the whole sidebar to the top of the screen so a visitor had to scroll past advertising to get to the article. If our layout is linked to our source order, then enabling the content editor to make decisions about priority is really hard. Only a system that can do some regeneration of the source order on the server-side – perhaps by way of multiple templates – can allow those kinds of decisions to be made. For larger systems this might be a possibility; for smaller ones, or when using an off-the-shelf CMS, it is less likely to be. Fortunately, any system that allows some form of custom field type can be used to pop a class on to an element, and with CSS grid layout that is all that is needed to be able to target that element and drop it into the right place when the content is viewed, be that on a desktop or a mobile device. This approach can move us away from forcing editors to think visually. At the moment, I might have to explain to an editor that if a certain piece of content needs to come first when viewed on a mobile device, it needs to be placed in the sidebar area, tying it to a particular layout and design. I have to do this because we have to enforce fairly strict rules around source order to make the mechanics of the responsive design work. If I can instead advise an editor to flag important content as high priority in the CMS, then I can make decisions elsewhere as to how that is displayed, and we can maintain the visual hierarchy across all the different ways content might be rendered. Why frustrate ourselves with specifications we can’t yet use in production? The CSS3 grid layout specification is listed under the Exploring section of the list of current work of the CSS Working Group. While discussing a module at this stage might seem a bit pointless if we can’t use it in production work, there is a very real reason for doing so. If those of us who will ultimately be developing sites with these tools find out about them early enough, then we can start to give our feedback to the people responsible for the specification. There is information on the same page about how to get involved with the discussions. So, if you have a bit of time this holiday season, why not have a play with the CSS3 grid layout module? I have outlined here some of my thoughts on how grid layout and other modules that separate layout from source order can be used in the work that I do. 
Likewise, wherever in the stack you work, playing with and thinking about new specifications means you can work out how you would use them to enhance your work. Spot a problem? Think that a change to the specification would improve things for a specific use case? Then you have something you could post to www-style to add to the discussion around this module. All the examples are on CodePen so feel free to play around and fork them. 2012 Rachel Andrew rachelandrew 2012-12-18T00:00:00+00:00 https://24ways.org/2012/css3-grid-layout/ code
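A closing aside on the grid examples above: it can help to see what the LESS .position() mixin compiles to. Expanding the wide-viewport museum rules by hand, using the mixin’s arithmetic (column × 2, row × 2 − 1, spans × 2 − 1), gives roughly this CSS, other declarations omitted:

/* .welcome { .position(1,1,12,1); } */
.welcome { -ms-grid-column: 2; -ms-grid-row: 1; -ms-grid-column-span: 23; -ms-grid-row-span: 1; }

/* .info { .position(13,1,4,1); } */
.info { -ms-grid-column: 26; -ms-grid-row: 1; -ms-grid-column-span: 7; -ms-grid-row-span: 1; }

/* .main { .position(1,2,12,1); } */
.main { -ms-grid-column: 2; -ms-grid-row: 3; -ms-grid-column-span: 23; -ms-grid-row-span: 1; }

/* .ads { .position(13,2,4,1); } */
.ads { -ms-grid-column: 26; -ms-grid-row: 3; -ms-grid-column-span: 7; -ms-grid-row-span: 1; }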
76 Giving CSS Animations and Transitions Their Place CSS animations and transitions may not sit squarely in the realm of the behaviour layer, but they’re stepping up into this area that used to be pure JavaScript territory. Heck, CSS might even perform better than its JavaScript equivalents in some cases. That’s pretty serious! With CSS’s new tricks blurring the lines between presentation and behaviour, it can start to feel bloated and messy in our CSS files. It’s an uncomfortable feeling. Here are a couple of methods I’ve found to be pretty helpful in keeping the potential bloat and wire-crossing under control when CSS has its hands in both presentation and behaviour. Same eggs, more baskets Structuring your CSS to have separate files for layout, typography, grids, and so on is a fairly common approach these days. But which one do you put your transitions and animations in? The initial answer, as always, is “it depends”. Small effects here and there will likely sit just fine with your other styles. When you move into more involved effects that require multiple animations and some logic support from JavaScript, it’s probably time to choose none of the above, and create a separate CSS file just for them. Putting all your animations in one file is a huge help for code organization. Even if you opt for a name less literal than animations.css, you’ll know exactly where to go for anything CSS animation related. That saves time and effort when it comes to editing and maintenance. Keeping track of which animations are still currently used is easier when they’re all grouped together as well. And as an added bonus, you won’t have to look at all those horribly unattractive and repetitive prefixed @keyframes rules unless you actually need to. An animations.css file might look something like the snippet below. It defines each animation’s keyframes and defines a class for each variation of that animation you’ll be using. Depending on the situation, you may also want to include transitions here in a similar way. (I’ve found defining transitions as their own class, or mixin, to be a huge help in past projects for me.) /* defining the animation */ @keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } @-webkit-keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } @-moz-keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } @-ms-keyframes catFall { from { background-position: center 0;} to {background-position: center 1000px;} } … /* class that assigns the animation */ .catsBackground { height: 100%; background: transparent url(../endlessKittens.png) 0 0 repeat-y; animation: catFall 1s linear infinite; -webkit-animation: catFall 1s linear infinite; -moz-animation: catFall 1s linear infinite; -ms-animation: catFall 1s linear infinite; } If we don’t need it, why load it? Having all those CSS animations and transitions in one file gives us the added flexibility to load them only when we want to. Loading a whole lot of things that will never be used might seem like a bit of a waste. While CSS has us impressed with its motion chops, it falls flat when it comes to the logic and fine-grained control. JavaScript, on the other hand, is pretty good at both those things. Chances are the content of your animations.css file isn’t acting alone. You’ll likely be adding and removing classes via JavaScript to manage your CSS animations at the very least. 
If your CSS animations are so entwined with JavaScript, why not let them hang out with the rest of the behaviour layer and only come out to play when JavaScript is supported? Dynamically linking your animations.css file like this means it will be completely ignored if JavaScript is off or not supported. No JavaScript? No additional behaviour, not even the parts handled by CSS. <script> document.write('<link rel="stylesheet" type="text/css" href="animations.css">'); </script> This technique comes up in progressive enhancement approaches as well, but it can help here to keep your presentation and behaviour nicely separated when more than one language is involved. The aim in both cases is to avoid loading files we won’t be using. If you happen to be doing something a bit fancier – like 3-D transforms or critical animations that require more nuanced fallbacks – you might need something like Modernizr to step in to determine support more specifically. But the general idea is the same. Summing it all up Using a couple of simple techniques like these, we get to pick where to best draw the line between behaviour and presentation based on the situation at hand, not just on what language we’re using. The power of when to separate and how to reassemble the individual pieces can be even greater if you use preprocessors as part of your process. We’ve got a lot of options! The important part is to make forward-thinking choices to save your future self, and even your current self, unnecessary headaches. 2012 Val Head valhead 2012-12-08T00:00:00+00:00 https://24ways.org/2012/giving-css-animations-and-transitions-their-place/ code
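To make the JavaScript hand-off concrete, here is a minimal sketch of starting and stopping the catFall animation from the animations.css example above; the banner ID is hypothetical:

// kick off the animation defined in animations.css
var banner = document.getElementById("banner");
banner.classList.add("catsBackground");

// later, removing the class stops the endless kittens
banner.classList.remove("catsBackground");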
15 Git for Grown-ups You are a clever and talented person. You create beautiful designs, or perhaps you have architected a system that even my cat could use. Your peers adore you. Your clients love you. But, until now, you haven’t *&^#^! been able to make Git work. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^! command to upload your work. It’s not you. It’s Git. Promise. Yes, this is an article about the popular version control system, Git. But unlike just about every other article written about Git, I’m not going to give you the top five commands that you need to memorize; and I’m not going to tell you all your problems would be solved if only you were using this GUI wrapper or that particular workflow. You see, I’ve come to a grand realization: when we teach Git, we’re doing it wrong. Let me back up for a second and tell you a little bit about the field of adult education. (Bear with me, it gets good and will leave you feeling both empowered and righteous.) Andragogy, unlike pedagogy, is a learner-driven educational experience. There are six main tenets to adult education: Adults prefer to know why they are learning something. The foundation of the learning activities should include experience. Adults prefer to be able to plan and evaluate their own instruction. Adults are more interested in learning things which directly impact their daily activities. Adults prefer learning to be oriented not towards content, but towards problems. Adults relate more to their own motivators than to external ones. Nowhere in this list does it include “memorize the five most popular Git commands”. And yet this is how we teach version control: init, add, commit, branch, push. You’re an expert! Sound familiar? In the hierarchy of learning, memorizing commands is the lowest, or most basic, form of learning. At the peak of learning you are able to not just analyze and evaluate a problem space, but create your own understanding in relation to your existing body of knowledge. “Fine,” I can hear you saying to yourself. “But I’m here to learn about version control.” Right you are! So how can we use this knowledge to master Git? First of all: I give you permission to use Git as a tool. A tool which you control and which you assign tasks to. A tool like a hammer, or a saw. Yes, your mastery of your tools will shape the kinds of interactions you have with your work, and your peers. But it’s yours to control. Git was written by kernel developers for kernel development. The web world has adopted Git, but it is not a tool designed for us and by us. It’s no Sass, y’know? Git wasn’t developed out of our frustration with managing CSS files in an increasingly complex ecosystem of components and atomic design. So, as you work through the next part of this article, give yourself a bit of a break. We’re in this together, and it’s going to be OK. We’re going to do a little activity. We’re going to create your perfect Git cheatsheet. I want you to start by writing down a list of all the people on your code team. This list may include: developers designers project managers clients Next, I want you to write down a list of all the ways you interact with your team. Maybe you’re a solo developer and you do all the tasks. Maybe you only do a few things. But I want you to write down a list of all the tasks you’re actually responsible for. 
For example, my list looks like this: writing code reviewing code publishing tested code to your server(s) troubleshooting broken code The next list will end up being a series of boxes in a diagram. But to start, I want you to write down a list of your tools and constraints. This list potentially has a lot of noun-like items and verb-like items: code hosting system (Bitbucket? GitHub? Unfuddle? self-hosted?) server ecosystem (dev/staging/live) automated testing systems or review gates automated build systems (that Jenkins dude people keep referring to) Brilliant! Now you’ve got your actors and your actions, it’s time to shuffle them into a diagram. There are many popular workflow patterns. None are inherently right or wrong; rather, some are more or less appropriate for what you are trying to accomplish. Centralized workflow Everyone saves to a single place. This workflow may mean no version control, or a very rudimentary version control system which only ever has a single copy of the work available to the team at any point in time. Branching workflow Everyone works from a copy of the same place, merging their changes into the main copy as their work is completed. Think of the branches as a motorcycle sidecar: they’re along for the ride and probably cannot exist in isolation from the main project for long without serious danger coming to either the driver or the sidecar passenger. Branches are a fundamental concept in version control — they allow you to work on new features, bug fixes, and experimental changes within a single repository, but without forcing the changes onto others working from the same branch. Forking workflow Everyone works from their own, independent repository. A fork is an exact duplicate of a repository that a developer can make their own changes to. It can be kept up to date with additional changes made in other repositories, but it cannot force its changes onto another’s repository. A fork is a complete repository which can use its own workflow strategies. If developers wish to merge their work with the main project, they must make a request of some kind (submit a patch, or a pull request) which the project collaborators may choose to adopt or reject. This workflow is popular for open source projects as it enforces a review process. Gitflow workflow A specific workflow convention which includes five streams of parallel coding efforts: master, development, feature branches, release branches, and hot fixes. This workflow is often simplified down to a few elements by web teams, but may be used wholesale by software product teams. The original article describing this workflow was written by Vincent Driessen back in January 2010. But these workflows aren’t about you yet, are they? So let’s make the connections. From the list of people on your team you identified earlier, draw a little circle. Give each of these circles some eyes and a smile. Now I want you to draw arrows between each of these people in the direction that code (ideally) flows. Does your designer create responsive prototypes which are pushed to the developer? Draw an arrow to represent this. Chances are high that you don’t just have people on your team, but you also have some kind of infrastructure. Hopefully you wrote about it earlier. For each of the servers and code repositories in your infrastructure, draw a square. Now, add to your diagram the relationships between the people and each of the machines in the infrastructure. Who can deploy code to the live server? How does it really get there?
I bet it goes through some kind of code hosting system, such as GitHub. Draw in those arrows. But wait! The code that’s on your development machine isn’t the same as the live code. This is where we introduce the concept of a branch in version control. In Git, a repository contains all of the code (sort of). A branch is a fragment of the code that has been worked on in isolation from the other branches within a repository. Often branches will have elements in common. When we compare two (or more) branches, we are asking about the difference (or diff) between these two slivers. Often the master branch is used on production, and the development branch is used on our dev server. The difference between these two branches is the untested code that is not yet deployed. On your diagram, see if you can colour-code according to the branch names at each of the locations within your infrastructure. You might find it useful to make a few different copies of the diagram to isolate each of the tasks you need to perform. For example: our team has a peer review process that each branch must go through before it is merged into the shared development branch. Finally, we are ready to add the Git commands necessary to make sense of the arrows in our diagram. If we are bringing code to our own workstation we will issue one of the following commands: clone (the first time we bring code to our workstation) or pull. Remembering that a repository contains all branches, we will issue the command checkout to switch from one branch to another within our own workstation. If we want to share a particular branch with one of our team mates, we will push this branch back to the place we retrieved it from (the origin). Along each of the arrows in your diagram, write the name of the command you are going to use when you perform that particular task. From here, it’s up to you to be selfish. Before asking Git what command it would like you to use, sketch the diagram of what you want. Git is your tool, you are not Git’s tool. Draw the diagram. Communicate your tasks with your team as explicitly as you can. Insist on being a selfish adult learner — demand that others explain to you, in ways that are relevant to you, how to do the things you need to do today. 2013 Emma Jane Westby emmajanewestby 2013-12-04T00:00:00+00:00 https://24ways.org/2013/git-for-grownups/ code
52 Git Rebasing: An Elfin Workshop Workflow This year Santa’s helpers have been tasked with making a garland. It’s a pretty simple task: string beads onto yarn in a specific order. When the garland reaches a specific length, add it to the main workshop garland. Each elf has a specific sequence they’re supposed to chain, which is given to them via a work order. (This is starting to sound like one of those horrible calculus problems. I promise it isn’t. It’s worse; it’s about Git.) For the most part, the system works really well. The elves are able to quickly build up a shared chain because each elf specialises in their own bit of garland, and then links the garland together. Because of this they’re able to work independently, but towards the common goal of making a beautiful garland. At first the elves are really careful with each bead they put onto the garland. They check with one another before merging their work, and review each new link carefully. As time crunches on, the elves pour a little more cheer into the eggnog cooler, and the quality of work starts to degrade. Tensions rise as mistakes are made and unkind words are said. The elves quickly realise they’re going to need a system to change the beads out when mistakes are made in the chain. The first common mistake is not looking to see what the latest chain is that’s been added to the main garland. The garland is huge, and it sits on a roll in one of the corners of the workshop. It’s a big workshop, so it is incredibly impractical to walk all the way to the roll to check what the last link is on the chain. The elves, being magical, have set up a monitoring system that allows them to keep a local copy of the main garland at their workstation. It’s an imperfect system though, so the elves have to request a manual refresh to see the latest copy. They can request a new copy by running the command git pull --rebase=preserve (They found that if they ran git pull on its own, they ended up with weird loops of extra beads off the main garland, so they’ve opted to use this method.) This keeps the shared garland up to date, which makes things a lot easier. A visualisation of the rebase process is available. The next thing the elves noticed is that if they worked on the main workshop garland, they were always running into problems when they tried to share their work back with the rest of the workshop. It was fine if they were working late at night by themselves, but in the middle of the day, it was horrible. (I’ve been asked not to talk about that time the fight broke out.) Instead of trying to share everything on their local copy of the main garland, the elves have realised it’s a lot easier to work on a new string and then knot this onto the main garland when their pattern repeat is finished. They generate a new string by issuing the following commands: git checkout master git checkout -b 1234_pattern-name 1234 represents the work order number and pattern-name describes the pattern they’re adding. Each bead is then added to the new link (git add bead.txt) and locked into place (git commit). Each elf repeats this process until the sequence of beads described in the work order has been added to their mini garland. To combine their work with the main garland, the elves need to make a few decisions.
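Before we look at those decisions, here’s the sequence for a single work order gathered into one sketch (the bead file name and commit message are whatever the work order calls for):

git checkout master
git checkout -b 1234_pattern-name    # start a new string for work order 1234
git add bead.txt                     # add a bead to the string
git commit --message "Red bead"      # lock it into place
# repeat the add/commit steps for each bead in the work order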
If they’re making a single strand, they issue the following commands: git checkout master git merge --ff-only 1234_pattern-name To share their work they publish the new version of the main garland to the workshop spool with the command git push origin master. Sometimes this fails. Sharing work fails because the workshop spool has gotten new links added since the elf last updated their copy of the main workshop spool. This makes the elves both happy and sad. It makes them happy because it means the other elves have been working too, but it makes them sad because they now need to do a bit of extra work to close their work order. To update the local copy of the workshop spool, the elf first unlinks the chain they just linked by running the command: git reset --merge ORIG_HEAD This works because the garland magic notices when the elves are doing a particularly dangerous thing and places a temporary, invisible bookmark to the last safe bead in the chain before the dangerous thing happened. The garland no longer has the elf’s work, and can be updated safely. The elf runs the command git pull --rebase=preserve and the changes all the other elves have made are applied locally. With these new beads in place, the elf now has to restring their own chain so that it starts at the right place. To do this, the elf turns back to their own chain (git checkout 1234_pattern-name) and runs the command git rebase master. Assuming their bead pattern is completely unique, the process will run and the elf’s beads will be restrung on the tip of the main workshop garland. Sometimes the magic fails and the elf has to deal with merge conflicts. These are kind of annoying, so the elf uses a special inspector tool to figure things out. The elf opens the inspector by running the command git mergetool to work through places where their beads have been added at the same points as another elf’s beads. Once all the conflicts are resolved, the elf saves their work, and quits the inspector. They might need to do this a few times if there are a lot of new beads, so the elf has learned to follow this update process regularly instead of just waiting until they’re ready to close out their work order. Once their link is up to date, the elf can now reapply their chain as before, publish their work to the main workshop garland, and close their work order: git checkout master git merge --ff-only 1234_pattern-name git push origin master Generally this process works well for the elves. Sometimes, though, when they’re tired or bored or a little drunk on festive cheer, they realise there’s a mistake in their chain of beads. Fortunately they can fix the beads without anyone else knowing. These tools can be applied to the whole workshop chain as well, but it causes problems because the magic assumes that elves are only ever adding to the main chain, not removing or reordering beads on the fly. Depending on where the mistake is, the elf has a few different options. Let’s pretend the elf has a sequence of five beads she’s been working on. The work order says the pattern should be red-blue-red-blue-red. If the sequence of beads is wrong (for example, blue-blue-red-red-red), the elf can remove the beads from the chain, but keep the beads in her workstation using the command git reset --soft HEAD~5. If she’s been using the wrong colours and the wrong pattern (for example, green-green-yellow-yellow-green), she can remove the beads from her chain and discard them from her workstation using the command git reset --hard HEAD~5. 
If one of the beads is missing (for example, red-blue-blue-red), she can restring the beads using the first method, or she can use a bit of magic to add the missing bead into the sequence. Using a tool that’s a bit like orthoscopic surgery, she first selects a sequence of beads which contains the problem. A visualisation of this process is available. Start the garland surgery process with the command: git rebase --interactive HEAD~4 A new screen comes up with the following information (the oldest bead is on top): pick c2e4877 Red bead pick 9b5555e Blue bead pick 7afd66b Blue bead pick e1f2537 Red bead The elf adjusts the list, changing “pick” to “edit” next to the first blue bead: pick c2e4877 Red bead edit 9b5555e Blue bead pick 7afd66b Blue bead pick e1f2537 Red bead She then saves her work and quits the editor. The garland magic has placed her back in time at the moment just after she added the first blue bead. She needs to manually fix up her garland to add the new red bead. If the beads were files, she might run commands like vim beads.txt and edit the file to make the necessary changes. Once she’s finished her changes, she needs to add her new bead to the garland (git add --all) and lock it into place (git commit). This time she assigns the commit message “Red bead – added” so she can easily find it. The garland magic has replaced the bead, but she still needs to verify the remaining beads on the garland. This is a mostly automatic process which is started by running the command git rebase --continue. The new red bead has been assigned a position formerly held by the blue bead, and so the elf must deal with a merge conflict. She opens up a new program to help resolve the conflict by running git mergetool. She knows she wants both of these beads in place, so the elf edits the file to include both the red and blue beads. With the conflict resolved, the elf saves her changes and quits the mergetool. Back at the command line, the elf checks the status of her work using the command git status. rebase in progress; onto 4a9cb9d You are currently rebasing branch '2_RBRBR' on '4a9cb9d'. (all conflicts fixed: run "git rebase --continue") Changes to be committed: (use "git reset HEAD <file>..." to unstage) modified: beads.txt Untracked files: (use "git add <file>..." to include in what will be committed) beads.txt.orig She removes the file added by the mergetool with the command rm beads.txt.orig and commits the edits she just made to the bead file using the commands: git add beads.txt git commit --message "Blue bead -- resolved conflict" With the conflict resolved, the elf is able to continue with the rebasing process using the command git rebase --continue. There is one final conflict the elf needs to resolve. Once again, she opens up the visualisation tool and takes a look at the two conflicting files. She incorporates the changes from the left and right column to ensure her bead sequence is correct. Once the merge conflict is resolved, the elf saves the file and quits the mergetool. Once again, she cleans out the backup file added by the mergetool (rm beads.txt.orig) and commits her changes to the garland: git add beads.txt git commit --message "Red bead -- resolved conflict" and then runs the final verification steps in the rebase process (git rebase --continue). The verification process runs through to the end, and the elf checks her work using the command git log --oneline. 
9269914 Red bead -- resolved conflict 4916353 Blue bead -- resolved conflict aef0d5c Red bead -- added 9b5555e Blue bead c2e4877 Red bead She knows she needs to read the sequence from bottom to top (the oldest bead is on the bottom). Reviewing the list she sees that the sequence is now correct. Sometimes, late at night, the elf makes new copies of the workshop garland so she can play around with the bead sequencer just to see what happens. It’s made her more confident at restringing beads when she’s found real mistakes. And she doesn’t mind helping her fellow elves when they run into trouble with their beads. The sugar cookies they leave her as thanks don’t hurt either. If you would also like to play with the bead sequencer, you can get a copy of the branches the elf worked on. Our lessons from the workshop: By using rebase to update your branches, you avoid merge commits and keep a clean commit history. If you make a mistake on one of your local branches, you can use reset to take commits off your branch. If you want to save the work, but uncommit it, add the parameter --soft. If you want to completely discard the work, use the parameter --hard. If you have merged working branch changes to the local copy of your master branch and it is preventing you from pushing your work to a remote repository, remove these changes using the command reset with the parameter --merge ORIG_HEAD before updating your local copy of the remote master branch. If you want to make a change to work that was committed a little while ago, you can use the command rebase with the parameter --interactive. You will need to include how many commits back in time you want to review. 2015 Emma Jane Westby emmajanewestby 2015-12-07T00:00:00+00:00 https://24ways.org/2015/git-rebasing/ code
118 Ghosts On The Internet By rights the internet should be full of poltergeists, poor rootless things looking for their real homes. Many events on the internet are not properly associated with their correct timeframe. I don’t mean a server set to the wrong time, though that happens too. Much of the content published on the internet is separated from any proper reference to its publication time. What does publication even mean? Let me tell you a story… “It is 2019 and this is Kathy Clees reporting on the story of the moment, the shock purchase of Microsoft by Apple Inc. An Internet Explorer security scare story from 2008 was responsible, yes, from 11 years ago, accidentally promoted by an analyst who neglected to check the date of their sources.” If you think this is fanciful nonsense, then cast your mind back to September 2008 and this story in Wired or The Times (UK) about a huge United Airlines stock tumble. A Florida newspaper had an automated popular stories section. A random reader looking at a story about United’s 2002 bankruptcy proceedings caused this story to get picked up by Google’s later visit to the South Florida Sun Sentinel’s news home page. The story was undated; Google’s news engine apparently gave it a 2008 date; an analyst picked it up and pushed it to Bloomberg, and within minutes the United stock was tumbling. Their stock price dropped from $12 to $3, then recovered to $11 over the day. An eight percent fall in share price over a mis-configured date. Completing this out-of-order Christmas Carol, let’s look at what is current practice and how dates are managed; we might even get to clank some chains. Publication date used to be inseparable from publication, the two things were stamped on the same piece of paper. How can we determine when things have been published, now? Determining publication dates Time as defined by http://www.w3.org/TR/NOTE-datetime extends ISO 8601, mandating the use of a year value. This is pretty well defined; we can even get very accurate timings down to milliseconds, and Ruby and other languages can even handle calendar reformation. So accuracy is not the issue. One problem is that there are many dates which could be interpreted as the publication date. Publication can mean any of: date written or created; date placed on server; last modified date; or the current date from the web server. Created and modified have parallels with file systems, but the large number of database-driven websites means that this no longer holds much meaning, as there are no longer any files. Checking the web server HEAD response may also not correspond: it might give the creation time for the HTML file you are viewing, or it might give the last modified time for a file from disk. It is too unreliable and lacking in context to be of real value. So if the web server will not help, then how can we get the right timeframe for our content? We are left with URLs and the actual page content. Looking at Flickr, this picture (by Douglas County History Research Center) has four date values which can be associated with it. It was taken around 1900, scanned in 1992, placed on Flickr on July 29th, 2008 and replaced later that day. Which dates should be represented here? This is a hard question to answer, but currently the date of upload to Flickr is the best represented in terms of the date URL, /photos/douglascountyhistory/archives/date-posted/2008/07/29/, plus some Dublin Core RDF for the year. Flickr uses 2008 as the value for this image.
Not accurate, but a reasonable compromise for the millions of other images on their site. Flickr represents location much better than it represents time. For the most part this is fine, but once you go back in time to the 1800s the maps of the world start to change a lot and you need to reference both time and place. The Google timeline search offers another interesting window on the world, showing results organised by decade for any search term. Being able to jump to a specific occurrence of a term makes it easier to get primary results rather than later reporting. The 1918 “Spanish flu” results jump out in this timeline. Any major news event will have multiple analysis articles after the event; finding the original reporting of Hurricane Katrina is harder now. Many publishers are putting older content online, e.g. Harpers, Nature, or The Times. Often these use good date-based URLs; sometimes they are unhelpful database references. If this content is available for free, then how much better would it be to provide good metadata on date of publication? Date based URLs A quick word on date-based URLs: they can be brilliant at capturing first published date. However they can be hard to interpret. Is /03/04 a date in March or in April? What about 08/03/04? Obviously 2008/03/04 is easier to understand; it is probably March 4th. Including a proper timestamp in the page content avoids this kind of guesswork. Many sites represent the date as a plain text string; a few hook an HTML class of date around it; very few provide an actual timestamp. Associating the date with the individual content makes it harder to get the date wrong. Movable Type and TypePad are notable exceptions; they will embed Dublin Core RDF to represent each posting, e.g. dc:date="2008-12-18T02:57:28-08:00". WordPress doesn’t support date markup out of the box, though there is a patch and a howto for hAtom available. In terms of newspapers, the BBC use <meta name="OriginalPublicationDate" content="2008/12/18 18:52:05" /> along with opaque URLs such as http://news.bbc.co.uk/1/hi/technology/7787335.stm. The Guardian use nice clear URLs http://www.guardian.co.uk/business/2008/dec/18/car-industry-recession but have no marked-up date on the page. The New York Times are similar to the Guardian with nice URLs, http://www.nytimes.com/2008/12/19/business/19markets.html, but again no timestamps. All of these papers have all the data available, but it is not marked up in a useful manner. Syndication formats Syndication formats are better at supporting dates. RSS uses RFC 822 for dates, just like email, so dates such as Wed, 17 Dec 2008 12:52:40 GMT are valid, with all the whitespace issues that entails. The Atom syndication format uses the much clearer http://tools.ietf.org/html/rfc3339 with timestamps of the form 1996-12-19T16:39:57-08:00. Both syndication formats encourage the use of last modified. This is understandable, but a pity, as published date is a very useful value. The Atom syndication format supports “published” and mandates “updated” as timestamps; see the Atom RFC 4287 for more detail. Marking up dates However the aim of this short article is to encourage you to use microformats or RDF to encode dates. A good example of this is Twitter: they use hAtom for each individual entry. http://twitter.com/zzgavin/status/1065835819 contains the following markup, which represents a human and a machine readable version of the time of that tweet.
<span class="published" title="2008-12-18T22:01:27+00:00">about 3 hours ago</span> The spec for datetime is still a draft at the minute, and there is ongoing conversation around the right format and semantics for representing date and time in microformats; see the datetime design pattern for details. The hAtom example page shows the minimal changes required to implement hAtom on well-formed blog post content and for other less well behaved content. You already have the information in your content publication systems; this is not some additional onerous content entry task, simply some template formatting. I started to see this as a serious issue after reading Stewart Brand’s Clock of the Long Now about five years ago. Brand’s book explores the issues of short-term thinking that permeate our society; thinking beyond the end of the financial year is a stretch for many people. The Long Now has a world view of a 10,000 year timeframe; see http://longnow.org/ for much more information. Freebase, from Long Now board member Danny Hillis, supports dates quite well – see the entry for A Christmas Carol. In conclusion I feel we should be making it easier for people searching for our content in the future. We’ve moved through tagging content and on to geo-tagging content. Now it is time to get the timestamps right on our content. “How do I know when something happened, and how can I find other things that happened at the same time?” is a fair question. This should be something I can satisfy simply and easily. There are a range of tools available to us in either hAtom or RDF to specify time accurately alongside the content, so what is stopping you? Thinking of the long term, it is hard for us to know now what will be of relevance for future generations, so we should aim to raise the floor for publishing tools so that all content has the right timeframe associated with it. We are moving from publishing words and pictures on the internet to being able to associate publication with an individual via XFN and OpenID. We can associate place quite well too; the last piece of useful metadata is timeframe. 2008 Gavin Bell gavinbell 2008-12-20T00:00:00+00:00 https://24ways.org/2008/ghosts-on-the-internet/ ux
268 Getting the Most Out of Google Analytics Something a bit different for today’s 24 ways article. For starters, I’m not a designer or a developer. I’m an evil man who sells things to people on the internet. Second, this article will likely be a little more nebulous than you’re used to, since it covers quite a number of points in a relatively short space. This isn’t going to be the complete Google Analytics Conversion University IQ course compressed into a single article, obviously. What it will be, however, is a primer on setting up and using Google Analytics in real life, and a great deal of what I’ve learned using Google Analytics nearly every working day for the past six (crikey!) years. Also, to be clear, I’ll be referencing new Google Analytics here; old Google Analytics is for loooosers (and those who want reliable e-commerce conversion data per site search term, natch). You may have been running your Analytics account for several years now, dipping in and out, checking traffic levels, seeing what’s popular… and that’s about it. Google Analytics provides so much more than that, but the number of reports available can often intimidate users, and documentation and case studies on their use are minimal at best. Let’s start! Setting up your Analytics profile Before we plough on, I just want to run through a quick checklist to make sure some basic settings have been enabled for your profile. If you haven’t clicked it, click the big cog on the top-right of Google Analytics and we’ll have a poke about. If you have an e-commerce site, e-commerce tracking has been enabled.
If your site has a search function, site search tracking has been enabled. Query string parameters that you do not want tracked as separate pages have been excluded (for example, any parameters needed for your platform to function, otherwise you’ll get multiple entries for the same page appearing in your reports). Filters have been enabled on your main profile to exclude your office IP address and any IPs of people who frequently access the site for work purposes. In decent numbers they tend to throw data off a tad.
You may also find the need to set up multiple profiles prefiltered for specific audience segments. For example, at Lovehoney we have seventeen separate profiles that allow me quick access to certain countries, devices and traffic sources without having to segment first. You’ll also find load time for any complex reports much improved. Use the same filter screen as above to set up a series of profiles that only include, say, mobile visits, or UK visitors, so you can quickly analyse important segments. Matt, what’s a segment? A segment is a subsection of your visitor base, which you define and then call on in reports to see specific data for that subsection. For example, in this report I’ve defined two segments, the first for IE6 users and the second for IE7. Segments are easily created by clicking the Advanced Segments tab at the top of any report and clicking +New Custom Segment. What does your site do? Understanding the goals of your site is an oft-covered topic, but it’s necessary not just to form a better understanding of your business and prioritize your time: understanding what you wish visitors to do on your site also translates well into a goal-driven analytics package like Google Analytics. Every site exists essentially to sell something, either financially through e-commerce, or to sell an idea or impart information, get people to download a CV or enquire about a service, or to sell space on that website to advertisers. If the site did not provide a positive benefit to its owners, it would not have a reason for being. Once you have understood the reason why you have a site, you can map that reason on to one of the three goal types Google Analytics provides. E-commerce This conversion type registers transactions as part of a sales process which requires a monetary value, what products have been bought, an SKU (stock keeping unit), affiliation (if you’re then attributing the sale to a third party or franchise) and so on. The benefit of e-commerce tracking is not only assigning non-arbitrary monetary value to the behaviour of visitors on your site, and being able to see ancillary costs such as shipping, but also seeing product-level information, like which products are preferred from various channels, popular categories, and so on. However, I find the e-commerce tracking options also useful for non-e-commerce sites. For example, if you’re offering downloads or subscriptions and having an email address or user’s details is worth something to you, you can set up e-commerce tracking to understand how much value your site is bringing. For example, an email address might be worth 20p to you, but if it also includes a name it’s worth 50p. A contact telephone number is worth £2, and so on. Page goals Page goals, unsurprisingly, track a visit to a page (often with a sequence of pages leading up to that page). This is what’s referred to as a goal funnel, and is generally used to track how visitors behave in a multistep checkout. Interestingly, the page doesn’t have to actually exist. For example, if you have a single page checkout, you can register virtual page views using trackPageview() when a visitor clicks into a particular section of the checkout or other form. If your site is geared towards getting someone to a particular page, but there isn’t a transaction (for example, a subscription page), this is for you. There are also behavioural goals, such as time on site and number of pages viewed, which are geared towards sites that make money from advertising.
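Before moving on, here’s a rough sketch of that virtual page view trick, using the asynchronous ga.js syntax that was current when this article was written (the /checkout/delivery path is a made-up example, and the snippet assumes the standard Google Analytics tracking code is already installed on the page):

<script>
// Fire a virtual page view when a visitor reaches the delivery section
// of a single-page checkout. The path doesn't need to exist as a real
// page; it just needs to be consistent so a goal funnel can match it.
_gaq.push(['_trackPageview', '/checkout/delivery']);
</script>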
But, going back to the page goals, these can be abstracted using regular expressions, meaning that you can define a funnel based on page type rather than having to set individual folders. In this example, I’ve created regexes for the main page types on my site, so I can create a wide funnel that captures visitors from where they enter through to checkout. Events Event tracking registers a predefined event, such as playing a video, or some interaction that can trigger JavaScript, such as a Tweet This button. Events can then be triggered using the trackEvent() call. If you want someone to watch a video all the way through, you would code your player to fire trackEvent() upon completion. While I don’t use events as goals, I use events elsewhere to see how well a video play aids conversion. This not only helps me justify the additional spend on creating video content, but also quickly highlights which videos are underperforming as sales tools. What a visitor can tell you
Now you have some proper goals set up, we can start to see how changes in content (on-site and external) affect those goals. Ultimately, when a visitor comes to your site, they bring information with them: where they came from (a search engine – including: keyword searched for; a referral; direct; affiliate; or ad campaign) demographics (country; whether they’re new or returning, within thirty days) technical information (browser; screen size; device; bandwidth) site-specific information (landing page; next click; previous values assigned to them as custom variables*) * A note about custom variables. There’s no hope in hell that I can cover custom variables in this article. Go research them. Custom variables are the single best way to hack Google Analytics and bend it to your will. Custom variables allow you to record anything you want about a visitor, which that visitor will then carry around with them between visits. It’s also great for plugging other services into Google Analytics (as shown by the marvelous way Visual Website Optimizer allows you to track and segment tests within the GA interface). Just make sure not to breach the terms of service, eh? CSI your website Police procedural TV shows are all the same: the investigators are called to a crime and come across a clue; there’s then an autopsy; new evidence leads them to a new location; they find a new clue; they put two and two together; they solve the mystery. This is your life now. Exciting! So, now you’re gathering a wealth of information about what sort of people visit your site, what they do when they’re there, and what eventually gets them to drive value to you. It’s now your job to investigate all these little clues to see which types of people drive the most value, and what you can change to improve it. Maybe not that exciting. However, Google Analytics comes pre-armed with extensive reports for you to delve into. As an e-commerce guy (as opposed to a page goal guy) my day pretty much follows the pattern below. Look at e-commerce conversion rate by traffic source compared to the same day in the previous week and previous month. As ours is an e-commerce site, we have weekly and monthly trends. A big spike on Sundays and Mondays, and payday towards the end of the month is always good; on the third week of a month there tends to be a lull. Spend time letting your Google Analytics data brew, understand your own trends and patterns, and you’ll start to get a feel for when something isn’t quite right. Traffic Sources → Sources → All Traffic Look at the conversion rate by landing page for any traffic source that feels significantly different to what’s expected. Check bounce rates, drill down to likely landing pages and check search keyword or referral site to see if it’s a particular subset of visitor. You can do this by clicking Secondary Dimension and choosing Keyword or Source. If it’s direct, choose Visitor Type to break down by new or returning visitor. Content → Site Content → Landing Pages I then tend to flip into Content Drilldown to see what the next clicks were from those landing pages, and whether they changed significantly to the date I’m comparing with. If they have, that’s usually an indicator of changed content (or its relevancy). Remember, if a bunch of people have found their way to your page via a method you’re not expecting (such as a mention on a Spanish radio station – this actually happened to me once), while the content hasn’t changed, the relevancy of it to the audience may have. 
Content → Site Content → Content Drilldown Once I have an idea of what content was consumed, and whether it was relevant to the user, I then look at the visitor specifics, such as browser or demographic data, to see again whether the change was limited to a specific subset. Site speed, for example, normally has a strong bearing on bounce rate, so compare that with previous data as well. Now, to be investigating at this level you still need a serious amount of data, in order to tell whether a change is significant or not. If you’re struggling with a small number of visitors, you might find reporting on a weekly or fortnightly basis more appropriate. However, once you’ve looked into the basics of why changes happen to the value of your site, you’ll soon find yourself limited by the reports offered in Standard Reporting. So, it’s time to build your own. Hooray! Custom reporting Google Analytics provides the tools to build reports specific to the types of investigations you frequently perform. Welcome to my world. Custom reports are quite simple to build: first, you determine the metric you want the report to cover (number of visitors, bounce rate, conversion rate, and so on), then choose a set of dimensions that you’d like to segment the report by (say, the source of the traffic, and whether they were new or returning users). You can filter the report, including or excluding particular dimension values, and you can assign the report to any of the profiles you created earlier. In the example below, I’ve created a report that shows me visits and conversion rate for any Google traffic that landed directly only on a product page. I can then drill down on each product page to see the complete phrases used to search. I can use this information in two ways: I can see which products aren’t converting, which shows me where I need to work harder on merchandising. I can give this information to my content team, showing them the actual phrases visitors used to reach our product content, helping them write better targeted product descriptions. The possibilities here are nearly endless, but here are a few examples of reports I find useful: Non-brand inbound search By creating a report that shows inbound search traffic which doesn’t include your brand, you can see more clearly the behaviour of visitors most likely to be unfamiliar with your site and brand values, without having to rely on the clumsy new or returning demographic data. Traffic/conversion/sales by hour This is pure stats porn, but actually more useful than real-time data. By seeing this data broken down at an hourly level, you can not only compare the current day to previous days, but also see the best performing times for email broadcasts and tweets. Visits, load time, conversion and sales by page and browser Page speed can often kill conversion rates, but it’s difficult to prove the value of focusing on speed in monetary terms. Having this report to hand helps me drive Operation Greenbelt, our effort to get into the sub-1.5 second band in Google Webmaster Tools. Useful things you can’t do in custom reporting If you have a search function on your website, then Conversion Rate and Products Bought by Site Search Term is an incredibly useful report that allows you to measure the effectiveness of your site’s search engine at returning products and content related to the search term used.
By including the products actually bought by visitors who searched for each term, you can use this information to better searchandise these results, escalating high propensity and high value products to the top of the results. However, it’s not possible to get this information out of new Google Analytics. Try it: select the following in the report builder: Metrics: total unique searches; e-commerce or goal conversion rate Dimensions: search term; product You’ll see that the data returned is a little nonsensical, though a 2,000% conversion rate would be nice. However, you can get more accurate information using advanced segments. By creating individual segments to define users who have searched for a particular term, you can run the sales performance and product performance reports as normal. It’s laborious, but it teaches a good lesson: data that seems inaccessible can normally be found another way! Reporting infrastructure Now that you have a series of reports that you can refer to on a daily or weekly basis, it’s time to put together a regular reporting infrastructure. Even if you’re not reporting to someone, having a set of key performance indicators that you can use to see how your performance is improving over time allows you to set yourself business goals on a monthly and annual basis. For my own reporting, I take some high-level metrics (such as visitors, conversion rate and average order value), and segment them by traffic source and, separately, landing page. These statistics I record weekly and report: current week compared with previous week; same week previous year (if available); 4-week average; 13-week average; 52-week average (if available). This takes into account weekly, monthly, seasonal and annual trends, and gives you a much clearer view of your performance. Getting data in other ways If you’re using Google Analytics frequently, with any large site you’ll come to a couple of conclusions: Doing any kind of practical comparative analysis is unwieldy. Boy, Google Analytics is slow! As you work with bigger datasets and put together more complex queries, you’ll see the loading graphic more than you’ll see actual data. So when you reach that level, there are ways to bypass the Google Analytics interface altogether, and get data into your own spreadsheet application for manipulation. Data Feed Query Explorer If you just want to pull down some quick statistics but still use complex filters and exotic metric and dimension combinations, the Data Feed Query Explorer is the quickest way of doing so. Authenticate with your Google Analytics account, select a profile, and you can start selecting metrics and dimensions to be generated in a handy, selectable tabulated format. Google Analytics API If you’re feeling clever, you can bypass having to copy and paste data by pulling it directly into Excel, Google Docs or your own application using the Google Analytics API. There are several scripts and plugins available to do this. I use the Automate Analytics Google Docs code (there’s also a paid version that simplifies setup and creates some handy reports for you). New shiny things Well, now that that’s over, I can show you some cool stuff. Well, at least it’s cool to me. Google Analytics is being constantly improved and new functionality is introduced nearly every month. Here are a couple of my favourites. Multichannel attribution Not every visitor converts on your site on the first visit. They may not even do so on the second visit, or third.
If they convert on the fourth visit, but each time they visit they do so via a different channel (for example, Search PPC, Search Organic, Direct, Email), which channel do you attribute the conversion to? The last channel, or the first? Dilemma! Google now has a Multichannel Attribution report, available in the Conversion category, which shows how each channel assists in converting, the overlap between channels, and where in the process that channel was important. For example, you may have analysed your blog traffic from Twitter and become disheartened that not many people were subscribing after visiting from Twitter links, but instead your high-value subscribers were coming from natural search. On the face of it, you’d spend less time tweeting, but a multichannel report may tell you that visitors first arrived via a Twitter link and didn’t subscribe, but then came back later after searching for your blog name on Google, after which they did. Don’t pack Twitter in yet! Visitor and goal flow Visitor and goal flow are amazing reports that help you visualize the flow of traffic through your site and, ultimately, into your checkout funnel or similar goal path. Flow reports are perfect for understanding drop-off points in your process, as well as what the big draws are on each page. Previously, if you wanted to visualize this data you had to set up several abstracted microgoals and chain them together in custom reports. Frankly, it was a pain in the arse and burned through your precious and limited goal allocation. Visitor flow bypasses all that and produces the report in an interactive flow diagram. While it doesn’t show you the holy grail of conversion likelihood by each path, you can segment visitor flow so that you can see very specifically how different segments of your visitor base behave. Go play with it now! 2011 Matt Curry mattcurry 2011-12-18T00:00:00+00:00 https://24ways.org/2011/getting-the-most-out-of-google-analytics/ business
206 Getting Hardboiled with CSS Custom Properties Custom Properties are a fabulous new feature of CSS and have widespread support in contemporary browsers. But how do we handle browsers without support for CSS Custom Properties? Do we wait until those browsers are lying dead in a ditch before we use them? Do we tool up and prop up our CSS using a post-processor? Or do we get tough? Do we get hardboiled? Previously only pre-processing tools like LESS and Sass enabled developers to use variables in their stylesheets, but now Custom Properties have brought variables natively to CSS. How do you write a custom property? It’s hardly a mystery. Simply add two dashes to the start of a property name and declare it inside a rule, commonly on the :root element. Like this: :root { --color-text-default : black; } If you’re more the underscore type, try this: :root { --color_text_default : black; } Hyphens or underscores are allowed in property names, but don’t be a chump and try to use spaces. Custom property names are also case-sensitive, so --color-text-default and --Color_Text_Default are two distinct properties. To use a custom property in your style rules, var() tells a browser to retrieve the value of a property. In the next example, the browser retrieves the black colour from the color-text-default variable and applies it to the body element: body { color : var(--color-text-default); } Like variables in LESS or Sass, CSS Custom Properties mean you don’t have to be a dumb mug and repeat colour, font, or size values multiple times in your stylesheets. Unlike a preprocessor variable though, CSS Custom Properties use the cascade, can be modified by media queries and other state changes, and can also be manipulated by Javascript. (Serg Hospodarets wrote a fabulous primer on CSS Custom Properties where he dives deeper into the code and possible applications.) Browser support Now it’s about this time that people normally mention browser support. So what about support for CSS Custom Properties? Current versions of Chrome, Edge, Firefox, Opera, and Safari are all good. Internet Explorer 11 and before? Opera Mini? Nasty. Sound familiar? Can I Use css-variables? Data on support for the css-variables feature across the major browsers from caniuse.com. Not to worry, we can manually check for Custom Property support in a browser by using an @supports directive, like this: :root { --color-text-default : black; } body { color : black; } @supports (--foo : bar) { body { color : var(--color-text-default); } } In that example we first set body element text to black and then override that colour with a Custom Property value when the browser supports our fictitious foo bar variable. Substitutions If we reference a variable that hasn’t been defined, that won’t be a problem as browsers are smart enough to ignore the value altogether. If we need a cast iron alibi, use substitutions to specify a fall-back value. body { color : var(--color-text-default, black); } Substitutions are similar to font stacks in that they contain a comma separated list of values. If there’s no value associated with a property, a browser will ignore it and move on to the next value in the list. Post-processing Of course we could use a post-processor plugin to turn Custom Properties into plain CSS, but hang on one goddam minute kiddo. Haven’t we been down this road before? Didn’t we engineer elaborate workarounds to enable us to use ‘advanced’ CSS3 properties like border-radius, CSS columns, and Flexbox in the past? We did what we had to do to get the job done, but came away feeling dirty.
I think there’s a better way, one that allows us to enjoy the benefits of CSS Custom Properties in browsers that support them, while providing an appropriate, but not identical, experience for people who use less capable browsers. Guess what, I’ve been down this road before too. 2Tone Stuff & Nonsense When Internet Explorer 6 was the big dumb browser everyone hated, I served two different designs on my website. For the modern browsers of the time, mod arrows and targets were everywhere in glorious red, white, and blue, and I implemented all of them using CSS attribute selectors which were considered advanced at the time: [class="banner"] { background-color : red; } Internet Explorer 6 ignored any selectors it didn’t understand, so people using that browser saw a simpler black and white, 2Tone-based design that I’d implemented for them using class selectors: .banner { background-color : black; } [class="banner"] { background-color : red; } You don’t have to be a detective to find out that most people thought I’d lost my wits, but Microsoft even used my website as a reference when they tested attribute selectors in Internet Explorer 7. They did, as I suggested, “Stomp to da betta browser.” Dumb browsers look the other way So how does this approach relate to tackling any lack of support for CSS Custom Properties? How can we take advantage of them without worrying about browsers with no support and having to implement complex workarounds, or spending hours specifying fallbacks that perfectly match our designs? Turns out, the answer is built into CSS, and always has been, because when browsers don’t know what they’re looking at, they look away. All we have to do is specify values for a simpler design first, and then follow that up with the values in our CSS Custom Properties: body { color : black; color : var(--color-text-default, black); } All browsers understand the first value (black), and if they‘re smart enough to understand the second (var(--color-text-default)), they’ll use it and override the first. If they’re too damn stupid to understand the custom property value, they’ll ignore it. Nobody dies. Repeat this for every style that contains a variable, baking an alternative, perhaps simpler design into your stylesheets for people who use less capable browsers, just like I did with Stuff & Nonsense. Conclusion I doubt that anyone agrees with presenting a design that looks broken or unloved—and I’m not advocating for that—but websites need not look the same in every browser. We can use substitutions to present a simpler design to people using less capable browsers. The decision when to start using new CSS properties isn‘t always a technical one. Sometimes a change in attitude about browser support is all that’s required. So get tough with dumb browsers and benefit from all the advantages that CSS Custom Properties offer. Get hardboiled. Resources: It’s Time To Start Using CSS Custom Properties—Smashing Magazine Using CSS variables correctly—Mike Riethmuller Developing Inspired Guides with CSS Custom Properties (variables)—Andy Clarke 2017 Andy Clarke andyclarke 2017-12-13T00:00:00+00:00 https://24ways.org/2017/getting-hardboiled-with-css-custom-properties/ code
307 Get the Balance Right: Responsive Display Text Last year in 24 ways I urged you to Get Expressive with Your Typography. I made the case for grabbing your readers’ attention by setting text at display sizes, that is to say big. You should consider very large text in the same way you might a hero image: a picture that creates an atmosphere and anchors your layout. When setting text to be read, it is best practice to choose body and subheading sizes from a pre-defined scale appropriate to the viewport dimensions. We set those sizes using rems, locking the text sizes together so they all scale according to the page default and your reader’s preferences. You can take the same approach with display text by choosing larger sizes from the same scale. However, display text, as defined by its purpose and relative size, is text to be seen first, and read second. In other words, a picture of text. When it comes to pictures, you are likely to scale all scene-setting imagery - cover photos, hero images, and so on - relative to the viewport. Take the same approach with display text: lock the size and shape of the text to the screen or browser window. Introducing viewport units With CSS3 came a new set of units which are locked to the viewport. You can use these viewport units wherever you might otherwise use any other unit of length such as pixels, ems or percentage. There are four viewport units, and in each case a value of 1 is equal to 1% of either the viewport width or height as reported in reference pixels1: vw - viewport width; vh - viewport height; vmin - viewport height or width, whichever is smaller; vmax - viewport height or width, whichever is larger. In one fell swoop you can set the size of a display heading to be proportional to the screen or browser width, rather than choosing from a scale in a series of media queries. The following makes the heading font size 13% of the viewport width: h1 { font-size: 13vw; } So for a selection of widths, the rendered font size would be:

Viewport width    Rendered font size (px) at 13vw
320               42
768               100
1024              133
1280              166
1920              250

A problem with using vw in this manner is the difference in text block proportions between portrait and landscape devices. Because the font size is based on the viewport width, the text on a landscape display is far bigger than when rendered on the same device held in a portrait orientation. Landscape text is much bigger than portrait text when using vw units. The proportions of the display text relative to the screen are so dissimilar that each orientation has its own different character, losing the consistency and considered design you would want when designing to make an impression. However if the text was the same size in both orientations, the visual effect would be much more consistent. This is where vmin comes into its own. Set the font size using vmin and the size is now set as a proportion of the smallest side of the viewport, giving you a far more consistent rendering. h1 { font-size: 13vmin; } Landscape text is consistent with portrait text when using vmin units.
Comparing vw and vmin renderings for various common screen dimensions, you can see how using vmin keeps the text size down to a usable magnitude. Rendered font size in pixels for each viewport (13vw / 13vmin): 320 × 480 → 42 / 42; 414 × 736 → 54 / 54; 768 × 1024 → 100 / 100; 1024 × 768 → 133 / 100; 1280 × 720 → 166 / 94; 1366 × 768 → 178 / 100; 1440 × 900 → 187 / 117; 1680 × 1050 → 218 / 137; 1920 × 1080 → 250 / 140; 2560 × 1440 → 333 / 187. Hybrid font sizing Using viewport units to set text in direct proportion to screen dimensions works well when sizing display text. It can be less desirable when sizing supporting text such as sub-headings, which you may not want to scale upwards at the same rate as the display text. For example, we can size a subheading using vmin so that it starts at 16 px on smaller screens and scales up in the same way as the main heading: h1 { font-size: 13vmin; } h2 { font-size: 5vmin; } Using vmin alone for supporting text can scale it too quickly. The balance of display text to supporting text on the phone works well, but the subheading text on the tablet, even though it has been increased in line with the main heading, is starting to feel disproportionately large and a little clumsy. This problem becomes magnified on even bigger screens. A solution to this is to use a hybrid method of sizing text2. We can use the CSS calc() function to calculate a font size simultaneously based on both rems and viewport units. For example: h2 { font-size: calc(0.5rem + 2.5vmin); } For a 320 px wide screen, the font size will be 16 px, calculated as follows: (0.5 × 16) + (320 × 0.025) = 8 + 8 = 16px For a 768 px wide screen, the font size will be 27 px: (0.5 × 16) + (768 × 0.025) = 8 + 19 = 27px This results in a more balanced subheading that doesn’t take emphasis away from the main heading. To give you an idea of the effect of using a hybrid approach, here’s a side-by-side comparison of hybrid and viewport text sizing (top: calc() hybrid method; bottom: vmin only). At viewport widths of 320, 480, 768, 960, 1080, 1280 and 1440 pixels, the hybrid method renders at 16, 20, 27, 32, 35, 40 and 44 px, while vmin alone renders at 16, 24, 38, 48, 54, 64 and 72 px. Over this festive period, try experimenting with the proportion of rem and vmin in your hybrid calculation to see what feels best for your particular setting. 1. A reference pixel is based on the logical resolution of a device which takes into account double density screens such as Retina displays. ↩︎ 2. For even more sophisticated uses of hybrid text sizing see the work of Mike Riethmuller. ↩︎ 2016 Richard Rutter richardrutter 2016-12-09T00:00:00+00:00 https://24ways.org/2016/responsive-display-text/ code
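As a compact recap of the technique above, here is the whole pattern in one hedged sketch; the specific proportions (13vmin, and 0.5rem + 2.5vmin) are the article’s worked examples rather than recommended constants:

/* display heading: locked to the smaller side of the viewport */
h1 { font-size: 13vmin; }

/* subheading: hybrid of a fixed rem part and a viewport part,
   e.g. 8px + 8px = 16px at 320px wide, 8px + 19px = 27px at 768px */
h2 { font-size: calc(0.5rem + 2.5vmin); }

Increasing the rem proportion makes the subheading steadier across screen sizes; increasing the vmin proportion makes it track the display heading more closely.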
163 Get To Grips with Slippy Maps Online mapping has definitely hit the mainstream. Google Maps made ‘slippy maps’ popular and made it easy for any developer to quickly add a dynamic map to his or her website. You can now find maps for store locations, friends nearby, upcoming events, and embedded in blogs. In this tutorial we’ll show you how to easily add a map to your site using the Mapstraction mapping library. There are many map providers available to choose from, each with slightly different functionality, design, and terms of service. Mapstraction makes deciding which provider to use easy by allowing you to write your mapping code once, and then easily switch providers. Assemble the pieces Utilizing any of the mapping libraries typically consists of similar overall steps: Create an HTML div to hold the map; Include the Javascript libraries; Create the Javascript Map element; Set the initial map center and zoom level; Add markers, lines, overlays and more. Create the Map Div The HTML div is where the map will actually show up on your page. It needs to have a unique id, because we’ll refer to that later to actually put the map here. This also lets you have multiple maps on a page, by creating individual divs and Javascript map elements. The size of the div also sets the height and width of the map. You set the size using CSS, either inline with the element, or via a CSS reference to the element id or class. For this example, we’ll use inline styling. <div id="map" style="width: 400px; height: 400px;"></div> Include Javascript libraries A mapping library is like any Javascript library. You need to include the library in your page before you use the methods of that library. For our tutorial, we’ll need to include at least two libraries: Mapstraction, and the mapping API(s) we want to display. For our first example, we’ll use the ubiquitous Google Maps library. However, you can just as easily include Yahoo, MapQuest, or any of the other supported libraries. Another important aspect of the mapping libraries is that many of them require an API key. You will need to agree to the terms of service, and get an API key for these. <script src="http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY" type="text/javascript"></script> <script type="text/javascript" src="http://mapstraction.com/src/mapstraction.js"></script> Create the Map Great, we’ve now put in all the pieces we need to start actually creating our map. This is as simple as creating a new Mapstraction object with the id of the HTML div we created earlier, and the name of the mapping provider we want to use for this map. With several of the mapping libraries you will need to set the map center and zoom level before the map will appear. The map centering actually triggers the initialization of the map. var mapstraction = new Mapstraction('map','google'); var myPoint = new LatLonPoint(37.404,-122.008); mapstraction.setCenterAndZoom(myPoint, 10); A note about zoom levels. The setCenterAndZoom function takes two parameters, the center as a LatLonPoint, and a zoom level that has been defined by mapping libraries. The current usage is for zoom level 1 to be “zoomed out”, or viewing the entire earth – and increasing the zoom level as you zoom in. Typically 17 is the maximum zoom, which is about the size of a house. Different mapping providers have different quality of zoomed in maps over different parts of the world. 
This is a perfect reason why using a library like Mapstraction is very useful, because you can quickly change mapping providers to accommodate users in areas that have bad coverage with some maps. To switch providers, you just need to include the Javascript library, and then change the second parameter in the Mapstraction creation. Or, you can call the switch method to dynamically switch the provider. So for Yahoo Maps (demo): var mapstraction = new Mapstraction('map','yahoo'); or Microsoft Maps (demo): var mapstraction = new Mapstraction('map','microsoft'); Want a 3D globe in your browser? Try FreeEarth (demo): var mapstraction = new Mapstraction('map','freeearth'); or even OpenStreetMap (free your data!) (demo): var mapstraction = new Mapstraction('map','openstreetmap'); Visit the Mapstraction multiple map demo page for an example of how easy it is to have many maps on your page, each with a different provider. Adding Markers While adding your first map is fun, and you can probably spend hours just sliding around, the point of adding a map to your site is usually to show the location of something. So now you want to add some markers. There are a couple of ways to add markers to your map. The simplest is directly creating markers. You could either hard code this into a rather static page, or dynamically generate these using whatever tools your site is built on. var marker = new Marker( new LatLonPoint(37.404,-122.008) ); marker.setInfoBubble("It's easy to add maps to your site"); mapstraction.addMarker( marker ); There is a lot more you can do with markers, including changing the icon, adding timestamps, automatically opening the bubble, or making them draggable. While it is straightforward to create markers one by one, there is a much easier way to create a large set of markers. And chances are, you can make it very easy by extending some data you’re already sharing: RSS. Specifically, using GeoRSS you can easily add a large set of markers directly to a map. GeoRSS is a community-built standard (like Microformats) that added geographic markup to RSS and Atom entries. It’s as simple as adding <georss:point>42 -83</georss:point> to your feeds to share items via GeoRSS. Once you’ve done that, you can add that feed as an ‘overlay’ to your map using the function: mapstraction.addOverlay("http://api.flickr.com/services/feeds/groups_pool.gne?id=322338@N20&format=rss_200&georss=1"); Mapstraction also supports KML for many of the mapping providers. So it’s easy to add various data sources together with your own data. Check out Mapufacture for a growing index of available GeoRSS feeds and KML documents. Play with your new toys Mapstraction offers a lot more functionality you can utilize for demonstrating a lot of geographic data on your website. It also includes geocoding and routing abstraction layers for making sure your users know where to go. You can see more on the Mapstraction website: http://mapstraction.com. 2007 Andrew Turner andrewturner 2007-12-02T00:00:00+00:00 https://24ways.org/2007/get-to-grips-with-slippy-maps/ code
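Pulling the tutorial’s fragments together, here is a minimal single-page sketch; YOUR_KEY remains a placeholder, and the script URLs and API calls are the ones shown above, so treat it as an illustration of the flow rather than a guaranteed-current example:

<div id="map" style="width: 400px; height: 400px;"></div>
<script src="http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY" type="text/javascript"></script>
<script type="text/javascript" src="http://mapstraction.com/src/mapstraction.js"></script>
<script type="text/javascript">
// create the map in the div above, with Google as the provider
var mapstraction = new Mapstraction('map', 'google');
// centering triggers initialization; zoom 10 is roughly city level
mapstraction.setCenterAndZoom(new LatLonPoint(37.404, -122.008), 10);
// add a single marker with an info bubble
var marker = new Marker(new LatLonPoint(37.404, -122.008));
marker.setInfoBubble("It's easy to add maps to your site");
mapstraction.addMarker(marker);
</script>

Swapping 'google' for 'yahoo', 'microsoft' or 'openstreetmap' (with the matching library included) is the only change needed to switch providers.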
7 Get Started With GitHub Pages (Plus Bonus Jekyll) After several failed attempts at getting set up with GitHub Pages, I vowed that if I ever figured out how to do it, I’d write it up. Fortunately, I did eventually figure it out, so here is my write-up. Why I think GitHub Pages is cool Normally when you host stuff on GitHub, you’re just storing your files there. If you push site files, what you’re storing is the code, and when you view a file, you’re viewing the code rather than the output. What GitHub Pages lets you do is store those files, and if they’re HTML files, you can view them like any other website, so there’s no need to host them separately yourself. GitHub Pages accepts static HTML but can’t execute languages like PHP, or use a database in the way you’re probably used to, so you’ll need to output static HTML files. This is where templating tools such as Jekyll come in, which I’ll talk about later. The main benefit of GitHub Pages is ease of collaboration. Changes you make in the repository are automatically synced, so if your site’s hosted on GitHub, it’s as up-to-date as your GitHub repository. This really appeals to me because when I just want to quickly get something set up, I don’t want to mess around with hosting; and when people submit a pull request, I want that change to be visible as soon as I merge it without having to set up web hooks. Before you get started If you’ve used GitHub before, already have an account and know the basics like how to set up a repository and clone it to your computer, you’re good to go. If not, I recommend getting familiar with that first. The GitHub site has extensive documentation on getting started, and if you’re not a fan of using the command line, the official GitHub apps for Mac and Windows are great. I also found this tutorial about GitHub Pages by Thinkful really useful, and it contains details on how to turn an existing repository into a GitHub Pages site. Although this involves a bit of using the command line, it’s minimal, and I’ll guide you through the basics. Setting up GitHub Pages For this demo we’re going to build a Christmas recipe site — nothing complex, just a place to store recipes so we can share them with people, and they can fork or suggest changes to ones they like. My GitHub username is maban, and the project I’ve set up is called christmas-recipes, so once I’ve set up GitHub Pages, the site can be found here: http://maban.github.io/christmas-recipes/ You can set up a custom domain, but by default, the URL for your GitHub Pages site is your-username.github.io/your-project-name. Set up the repository The first thing we’re going to do is create a new GitHub repository, in exactly the same way as normal, and clone it to the computer. Let’s give it the name christmas-recipes. There’s nothing in it at the moment, but that’s OK. After setting up the repository on the GitHub website, I cloned it to my computer in my Sites folder using the GitHub app (you can clone it somewhere else, if you want), and now I have a local repository synced with the remote one on GitHub. Navigate to the repository Now let’s open up the command line and navigate to the local repository. The easiest way to do this in Terminal is by typing cd and dragging and dropping the folder into the terminal window and pressing Return. You can refer to Chris Coyier’s GIF illustrating this very same thing, from last week’s 24 ways article “Grunt for People Who Think Things Like Grunt are Weird and Hard” (which is excellent). 
So, for me, that’s… cd /Users/Anna/Sites/christmas-recipes Create a special GitHub Pages branch So far we haven’t done anything different from setting up a regular repository, but here’s where things change. Now we’re in the right place, let’s create a gh-pages branch. This tells GitHub that this is a special branch, and to treat the contents of it differently. Make sure you’re still in the christmas-recipes directory, and type this command to create the gh-pages branch: git checkout --orphan gh-pages That --orphan option might be new to you. An orphaned branch is an empty branch that’s disconnected from the branch it was created off, and it starts with no commits, making it a special standalone branch. checkout switches us from the branch we were on to that branch. If all’s gone well, we’ll get a message saying Switched to a new branch ‘gh-pages’. You may get an error message saying you don’t have admin privileges, in which case you’ll need to type sudo at the start of that command. Make gh-pages your default branch (optional) The gh-pages branch is separate to the master branch, but by default, the master branch is what will show up if we go to our repository’s URL on GitHub. To change this, go to the repository settings and select gh-pages as the default branch. If, like me, you only want the one branch, you can delete the master branch by following Oli Studholme’s tutorial. It’s actually really easy to do, and means you only have to worry about keeping one branch up to date. If you prefer to work from master but push updates to the gh-pages branch, Lea Verou has written up a quick tutorial on how to do this, and it basically involves working from the master branch, and using git rebase to bring one branch up to date with another. At the moment, we’ve only got that branch on the local machine, and it’s empty, so to be able to see something at our special GitHub Pages URL, we’ll need to create a page and push it to the remote repository. Make a page Open up your favourite text editor, create a file called index.html in your christmas-recipes folder, and put some exciting text in it. Don’t worry about the markup: all we need is text because right now we’re just checking it works. Now, let’s commit and push our changes. You can do that in the command line if you’re comfortable with that, or you can do it via the GitHub app. Don’t forget to add a useful commit message. Now we’re ready to see our gorgeous new site! Go to your-username.github.io/your-project-name and, hopefully, you’ll see your first GitHub Pages site. If not, don’t panic, it can take up to ten minutes to publish, so you could make a quick cake in a cup while you wait. After a short wait, our page should be live! Hopefully that wasn’t too traumatic. Now we know it works, we can add some proper markup and CSS and even some more pages. If you’re feeling brave, how about we take it to the next level… Setting up Jekyll Since GitHub Pages can’t execute languages like PHP, we need to give it static HTML files. This is fine if there are only a few pages, but soon we’ll start to miss things like PHP includes for content that’s the same on every page, like headers and footers. Jekyll helps set up templates and turn them into static HTML. It separates markup from content, and makes it a lot easier for people to edit pages collaboratively. With our recipe site, we want to make it really easy for people to fix typos or add notes, without having to understand PHP. Also, there’s the added benefit that static HTML pages load really fast. 
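Before we move on to Jekyll, here is the whole GitHub Pages setup from this section condensed into one hedged command-line sketch; the path and branch name are the ones used above, while the placeholder page content and commit message are my own illustrative choices:

cd /Users/Anna/Sites/christmas-recipes
# create the special, empty gh-pages branch
git checkout --orphan gh-pages
# make a placeholder page; any text will do for now
echo "Merry Christmas!" > index.html
git add index.html
git commit -m "Add placeholder home page"
# publish; the site can take up to ten minutes to appear
git push origin gh-pages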
Jekyll isn’t officially supported on Windows, but it is still possible to run it if you’re prepared to get your hands dirty. Install Jekyll Back in Terminal, we’re going to install Jekyll… gem install jekyll …and wait for the script to run. This might take a few moments. It might take so long that you get worried it’s broken, but resist the urge to start mashing your keyboard like I did. Get Jekyll to run on the repository Fingers crossed nothing has gone wrong so far. If something did go wrong, don’t give up! Check this helpful post by Andy Taylor – you probably just need to install something else first. Now we’re going to tell Jekyll to set up a new project in the repository, which is in my Sites folder (yours may be in a different place). Remember, we can drag the directory into the terminal window after the command. jekyll new /Users/Anna/Sites/christmas-recipes If everything went as expected, we should get this error message: Conflict: /Users/Anna/Sites/christmas-recipes exists and is not empty. But that’s OK. It’s just upset because we’ve got that index.html file and possibly also a README.md in there that we made earlier. So move those onto your desktop for the moment and run the command again. jekyll new /Users/Anna/Sites/christmas-recipes It should say that the site has installed. Check you’re in the repository, and if you’re not, navigate to it by typing cd and dragging the christmas-recipes directory into the terminal window… cd /Users/Anna/Sites/christmas-recipes …and type this command to tell Jekyll to run: jekyll serve --watch By adding --watch at the end, we’re forcing Jekyll to rebuild the site every time we hit Save, so we don’t have to keep telling it to update every time we want to view the changes. We’ll need to run this every time we start work on the project, otherwise changes won’t be applied. For now, wait while it does its thing. Update the config file When it’s finished, we’ll see the text press ctrl-c to stop. Don’t do that, though. Instead, open up the directory. You’ll notice some new files and folders in there. There’s one called _site, and that’s where all the site files are saved when they’re turned into static HTML. Don’t touch the files in here — they’re the generated files and will get overwritten every time we make changes to pages and layouts. There’s a file in our directory called _config.yml. This has some settings we can change, one of them being what our base URL is. GitHub Pages will assume the base URL is above the project repository, so changing the settings here will help further down the line when setting up navigation links. Replace the contents of the _config.yml file with this: name: Christmas Recipes markdown: redcarpet pygments: true baseurl: /christmas-recipes Set up your files Overwrite the index.html file in the root with the one we made earlier (you might want to pop the README.md back in there, too). Delete the files in the css folder — we’ll add our own later. View the Jekyll site Open up your favourite browser and type http://localhost:4000/christmas-recipes in the address bar. Check it out, that’s your site! But it could do with a bit more love. Set up the _includes files It’s always useful to be able to pull in snippets of content onto pages, such as the header and footer, so they only need to be updated in one place. That’s what an _includes folder is for in Jekyll. Create a folder in the root called _includes, and within it add two files: head.html and foot.html. 
In head.html, paste the following: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>{{ page.title }}</title> <link rel="stylesheet" href="{{ site.baseurl }}/css/main.css" > </head> <body> and in foot.html: </body> </html> Whenever we want to pull in something from the _includes folder, we can use {% include filename.html %} in the layout file — I’ll show you how to set that up in the next step. Making layouts In our directory, there’s a folder called _layouts and this lets us create a reusable template for pages. Inside that is a default.html file. Delete everything in default.html and paste in this instead: {% include head.html %} <h1>{{ page.title }}</h1> {{ content }} {% include foot.html %} That’s a very basic page with a header, footer, page title and some content. To apply this template to a page, go back into the index.html page and add this snippet to the very top of the file: --- layout: default title: Home --- Now save the index.html file and hit Refresh in the browser. We should see a heading where {{ page.title }} was in the layout, which matches what comes after title: on the page itself (in this case, Home). So, if we wanted a subheading to appear on every page, we could add {{ page.subheading }} to where we want it to appear in our layout file, and a line that says subheading: This is a subheading in between the dashes at the top of the page itself. Using Markdown for templates Anything on a page that sits under the closing dashes is output where {{ content }} appears in the template file. At the moment, this is being output as HTML, but we can use Markdown instead, and Jekyll will convert that into HTML. For this recipe site, we want to make it as easy as possible for people to be able to collaborate, and also have the markup separate from the content, so let’s use Markdown instead of HTML for the recipes. Telling a page to use Markdown instead of HTML is incredibly simple. All we need to do is change the filename from .html to .md, so let’s rename the index.html to index.md. Now we can use Markdown, and Jekyll will output that as HTML. Create a new layout We’re going to create a new layout called recipe which is going to be the template for any recipe page we create. Let’s keep it super simple. In the _layouts folder, create a file called recipe.html and paste in this: {% include head.html %} <main> <h1>{{ page.title }}</h1> {{ content }} <p>Recipe by <a href="{{ page.recipe-attribution-link }}">{{ page.recipe-attribution }}</a>.</p> </main> {% include nav.html %} {% include foot.html %} That’s our template. The content that goes on the recipe layout includes a page title, the recipe content, a recipe attribution and a recipe attribution link. Adding some recipe pages Create a new file in the root of the christmas-recipes folder and call it gingerbread.md. Paste the following into it: --- layout: recipe title: Gingerbread recipe-attribution: HungryJenny recipe-attribution-link: http://www.opensourcefood.com/people/HungryJenny/recipes/soft-christmas-gingerbread-cookies --- Makes about 15 small cookies. ## Ingredients * 175g plain flour * 90g brown sugar * 50g unsalted butter, diced, at room temperature * 2 tbsp golden syrup * 1 egg, beaten * 1 tsp ground ginger * 1 tsp cinnamon * 1 tsp bicarbonate of soda * Icing sugar to dust ## Method 1. Sift the flour, ginger, cinnamon and bicarbonate of soda into a large mixing bowl. 2. Use your fingers to rub in the diced butter. Mix in the sugar. 3. Mix the egg with the syrup then pour into the flour mixture. 
Fold in well to form a dough. 4. Tip some flour onto the work surface and knead the dough until smooth. 5. Preheat the oven to 180°C. 6. Roll the dough out flat and use a shaped cutter to make as many cookies as you like. 7. Transfer the cookies to a tray and bake in the oven for 15 minutes. Lightly dust the cookies with icing sugar. The content is in Markdown, and when we hit Save, it’ll be converted into HTML in the _site folder. Save the file, and go to http://localhost:4000/christmas-recipes/gingerbread.html in your favourite browser. As you can see, the Markdown content has been converted into HTML, and the attribution text and link has been inserted in the right place. Add some navigation In your _includes folder, create a new file called nav.html. Here is some code that will generate your navigation: <nav class="nav-primary" role="navigation"> <ul> {% for p in site.pages %} <li> <a {% if p.url == page.url %}class="active"{% endif %} href="{{ site.baseurl }}{{ p.url }}">{{ p.title }}</a> </li> {% endfor %} </ul> </nav> This is going to look for all pages and generate a list of them, and give the navigation item that is currently active a class of active so we can style it differently. Now we need to include that navigation snippet in our layout. Paste {% include nav.html %} into the default.html file, under {{ content }}. Push the changes to GitHub Pages Now we’ve got a couple of pages, it’s time to push our changes to GitHub. We can do this in exactly the same way as before. Check your special GitHub URL (your-username.github.io/your-project-name) and you should see your site up and running. If you quit Terminal, don’t forget to run jekyll serve --watch from within the directory the next time you want to work on the files. Next steps Now we know the basics of creating Jekyll templates and publishing them as GitHub Pages, we can have some fun adding more pages and styling them up. Here’s one I made earlier. I’ve only been using Jekyll for a matter of weeks, mainly for prototyping. It’s really good as a content management system for blogs, and a lot of people host their Jekyll blogs on GitHub, such as Harry Roberts: “By hosting the code so openly it will make me take more pride in it and allow me to work on it much more easily; no excuses now!” Overall, the documentation for Jekyll feels a little sparse and geared more towards blogs than other sites, but as more people discover the benefits of it, I’m sure this will improve over time. If you’re interested in poking about with some code, all the files from this tutorial are available on GitHub. 2013 Anna Debenham annadebenham 2013-12-18T00:00:00+00:00 https://24ways.org/2013/get-started-with-github-pages/  
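Adding further recipes follows the same pattern; here is a hedged sketch of a second, entirely hypothetical recipe file (mince-pies.md; the name, attribution and quantities are invented for illustration):

---
layout: recipe
title: Mince Pies
recipe-attribution: A. N. Other
recipe-attribution-link: http://example.com/mince-pies
---

Makes about 12 pies.

## Ingredients

* 225g plain flour
* 110g unsalted butter
* 280g mincemeat

## Method

1. Rub the butter into the flour, then add cold water a spoonful at a time to form a dough.
2. Line a pie tin with pastry circles, fill with mincemeat, top with pastry stars and bake at 200°C for about 20 minutes.

Because it uses layout: recipe, the page picks up the heading, attribution and navigation automatically, and appears in the nav.html loop with no extra work.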
151 Get In Shape Pop quiz: what’s wrong with the following navigation? Maybe nothing. But then again, maybe there’s something bugging you about the way it comes together, something you can’t quite put your finger on. It seems well-designed, but it also seems a little… off. The design decisions that led to this eventual form were no doubt well-considered: Client: The top level needs to have a “current page” status indicator of some sort. Designer: How about a white tab? Client: Great! The second level needs to show up underneath the first level though… Designer: Okay, but that white tab I just added makes it hard to visually connect the bottom nav to the top. Client: Too late, we’ve seen the white tab and we love it. Try and make it work. Designer: Right. So I placed the second level in its own box. Client: Hmm. They seem too separated. I can’t tell that the yellow nav is the second level of the first. Designer: How about an indicator arrow? Client: Brilliant! The problem is that the end result feels awkward and forced. During the design process, little decisions were made that ultimately affect the overall shape of the navigation. What started out as a neatly contained rounded rectangle ended up as an ambiguous double shape that looks funny, though it’s often hard to pinpoint precisely why. The Shape of Things Well the why in this case is because seemingly unrelated elements in a design still end up visually interacting. Adding a new item to a page impacts everything surrounding it. In this navigation example, we’re looking at two individual objects that are close enough to each other that they form a relationship; if we reduce them to strictly their outlines, it’s a little easier to see that this particular combination registers oddly. The two shapes float with nothing really grounding them. If they were connected, perhaps it would be a different story. The white tab divides the top shape in half, leaving a gap in the middle of it. There’s very little balance in this pairing because the overall shape of the navigation wasn’t considered during the design process. Here’s another example: Gmail. Great email client, but did you ever closely look at what’s going on in that left hand navigation? The continuous blue bar around the message area spills out into the navigation. If we remove all text, we’re left with this odd configuration: Though the reasoning for anchoring the navigation highlight against the message area might be sound, the result is an irregular shape that doesn’t correspond with anything in reality. You may never consciously notice it, but once you do it’s hard to miss. One other example courtesy of last.fm: The two header areas are the same shade of pink so they appear to be closely connected. When reduced to their outlines it’s easy to see that this combination is off-balance: the edges don’t align, the sharp corners of the top shape aren’t consistent with the rounded corners of the bottom, and the part jutting out on the right of the bottom one seems fairly random. The result is a duo of oddly mis-matched shapes. Design Strategies Our minds tend to pick out familiar patterns. A clever designer can exploit this by creating references in his or her work to shapes and combinations with which viewers are already familiar. There are a few simple ideas that can be employed to help you achieve this: consistency, balance, and completion. Consistency A fairly simple way to unify the various disparate shapes on a page is by designing them with a certain amount of internal consistency. 
You don’t need to apply an identical size, colour, border, or corner treatment to every single shape; devolving a design into boring repetition isn’t what we’re after here. But it certainly doesn’t hurt to apply a set of common rules to most shapes within your work. Consider purevolume and its multiple rounded-corner panels. From the bottom of the site’s main navigation to the grey “Extras” panels halfway down the page (shown above), multiple shapes use a common border radius for unity. Different colours, different sizes, different content, but the consistent outlines create a strong sense of similarity. Not that every shape on the site follows this rule; they break the pattern right at the top with a darker sharp-cornered header, and again with the thumbnails below. But the design remains unified, nonetheless. Balance Arguably the biggest problem with the last.fm example earlier is one of balance. The area poking out of the bottom shape created a fairly obvious imbalance for no apparent reason. The right hand side is visually emphasized due to the greater area of pink coverage, but with the white gap left beside it, the emphasis seems unwarranted. It’s possible to create tension in your design by mismatching shapes and throwing off the balance, but when that happens unintentionally it can look like a mistake. Above are a few examples of design elements in balanced and unbalanced configurations. The examples in the top row are undeniably more pleasing to the eye than those in the bottom row. If these were fleshed out into full designs, those derived from the templates in the top row would naturally result in stronger work. Take a look at the header on 9Rules for a study in well-considered balance. On the left you’ll see a couple of paragraphs of text, on the right you have floating navigational items, and both flank the site’s logo. This unusual layout combines multiple design elements that look nothing alike, and places them together in a way that anchors each so that no one weighs down the header. Completion And finally we come to the idea of completion. Shapes don’t necessarily need hard outlines to be read visually as shapes, which can be exploited for various purposes. Notice how Zend’s mid-page “Business Topics” and “News” items (below) fade out to the right and bottom, but the placement of two of these side-by-side creates an impression of two panels rather than three disparate floating columns. By allowing the viewer’s eye to complete the shapes, they’ve lightened up the design of the page and removed inessential lines. In a busy design this technique could prove quite handy. Along the same lines, the individual shapes within your design may also be combined visually to form outlines of larger shapes. The differently-coloured header and main content/sidebar shapes on Veerle’s blog come together to form a single central panel, further emphasized by the slight drop shadow to the right. Implementation Studying how shape can be used effectively in design is simply a starting point. As with all things design-related, there are no hard and fast rules here; ultimately you may choose to bring these principles into your work more often, or break them for effect. But understanding how shapes interact within a page, and how that effect is ultimately perceived by viewers, is a key design principle you can use to impress your friends. 2007 Dave Shea daveshea 2007-12-16T00:00:00+00:00 https://24ways.org/2007/get-in-shape/ design
53 Get Expressive with Your Typography In 1955 Beatrice Warde, an American communicator on typography, published a series of essays entitled The Crystal Goblet in which she wrote, “People who love ideas must have a love of words. They will take a vivid interest in the clothes that words wear.” And with that proposition Warde introduced the idea that just as we judge someone based on the clothes they are wearing, so we make judgements about text based on the typefaces in which it is set. Beatrice Warde. ©1970 Monotype Imaging Inc. Choosing the same typeface as everyone else, especially if you’re trying to make a statement, is like turning up to a party in the same dress; to a meeting in the same suit, shirt and tie; or to a craft ale dispensary in the same plaid shirt and turned-up skinny jeans. But there’s more to your choice of typeface than simply making an impression. In 2012 Jon Tan wrote on 24 ways about a scientific study called “The Aesthetics of Reading” which concluded that “good quality typography is responsible for greater engagement during reading and thus induces a good mood.” Furthermore, at this year’s Ampersand conference Sarah Hyndman, an expert in multisensory typography, discussed how typefaces can communicate with our subconscious. Sarah showed that different fonts could have an effect on how food tasted. A rounded font placed near a bowl of jellybeans would make them taste sweeter, and a jagged angular font would make them taste more sour. The quality of your typography can therefore affect the mood of your reader, and your font choice directly affects the senses. This means you can manipulate the way people feel. You can change their emotional state through type alone. Now that’s a real superpower! The effects of your body text design choices are measurable but subtle. If you really want to have an impact you need to think big. Literally. Display text and headings are your attention grabbers. They are your chance to interrupt, introduce and seduce. Display text and headings set the scene and draw people in. Text set large creates an image that visitors see before they read, and that’s your chance to choose a typeface that immediately expresses what the text, and indeed the entire website, stands for. What expectations of the text do you want to set up? Youthful enthusiasm? Businesslike? Cutting-edge? Hipster? Sensible and secure? Fun and informal? Authoritarian? Typography conveys much more than just information. It imparts feeling, emotion and sentiment, and arouses preconceived ideas of trust, tone and content. Think about taking advantage of this by introducing impactful, expressive typography to your designs on the web. You can alter the way your reader feels, so what emotion do you want to provoke? Maybe you want them to feel inspired like this stop smoking campaign: helsenorge.no Perhaps they should be moved and intrigued, as with Makeshift magazine: mkshft.org Or calmly reassured: www.cleopatra-marina.gr Fonts also tap into the complex library of associations that we’ve been accumulating in our brains all of our lives. You build up these associations every time you see a font from the context that you see it in. All of us associate certain letterforms with topics, times and places. Retiro is obviously Spanish: Retiro by Typofonderie Bodoni and Eurostile used in this menu couldn’t be much more Italian: Bodoni and Eurostile, both designed in Italy To me, Clarendon gives a sense of the 1960s and 1970s. 
I’m not sure if that’s what Costa was going for, but that’s what it means to me: Costa coffee flier And Knockout and Gotham really couldn’t be much more American: Knockout and Gotham by Hoefler & Co When it comes to choosing your display typeface, the type designer Christian Schwartz says there are two kinds. First are the workhorse typefaces that will do whatever you want them to do. Helvetica, Proxima Nova and Futura are good examples. These fonts can be shaped in many different ways, but this also means they are found everywhere and take great skill and practice to work with in a unique and striking manner. The second kind of typeface is one that does most of the work for you. Like finely tailored clothing, it’s the detail in the design that adds interest. Setting headings in Bree rather than Helvetica makes a big difference to the tone of the article Such typefaces carry much more inherent character, but are also less malleable and harder to adapt to different contexts. Good examples are Marr Sans, FS Clerkenwell, Strangelove and Bree. Push the boat out Remember, all type can have an effect on the reader. Take advantage of that and allow your type to have its own vernacular and impact. Be expressive with your type. Don’t be too reverential, dogmatic – or ordinary. Be brave and push a few boundaries. Adapted from Web Typography, a book in progress by Richard Rutter. 2015 Richard Rutter richardrutter 2015-12-04T00:00:00+00:00 https://24ways.org/2015/get-expressive-with-your-typography/ design
109 Geotag Everywhere with Fire Eagle A note from the editors: Since this article was written Yahoo! has retired the Fire Eagle service. Location, they say, is everywhere. Everyone has one, all of the time. But on the web, it’s taken until this year to see the emergence of location in the applications we use and build. The possibilities are broad. Increasingly, mobile phones provide SDKs to approximate your location wherever you are, browser extensions such as Loki and Mozilla’s Geode provide browser-level APIs to establish your location from the proximity of wireless networks to your laptop. Yahoo’s Brickhouse group launched Fire Eagle, an ambitious location broker enabling people to take their location from any of these devices or sources, and provide it to a plethora of web services. It enables you to take the location information that only your iPhone knows about and use it anywhere on the web. That said, this is still a time of location as an emerging technology. Fire Eagle stores your location on the web (protected by application-specific access controls), but to try and give an idea of how useful and powerful your location can be — regardless of the services you use now — today’s 24ways is going to build a bookmarklet to call up your location on demand, in any web application. Location Support on the Web Over the past year, the number of applications implementing location features has increased dramatically. Plazes and Brightkite are both full featured social networks based around where you are, whilst Pownce rolled in Fire Eagle support to allow geotagging of all the content you post to their microblogging service. Dipity’s beautiful timeline shows you moving from place to place and Six Apart’s activity stream for Movable Type started exposing your movements. The number of services that hook into Fire Eagle will increase as location awareness spreads through the developer community, but you can use your location on other sites indirectly too. Consider Flickr. Now world renowned for their incredible mapping and places features, geotagging on Flickr started out as a grassroots extension of regular tagging. That same technique can be used to start rolling geotagging into any publishing platform you come across, for any kind of content. Machine-tags (geo:lat= and geo:lon=) and the adr and geo microformats can be used to enhance anything you write with location information. A crash course in avian inflammability Fire Eagle is a location store. A broker between services and devices which provide location and those which consume it. It’s a switchboard that controls which pieces of your location different applications can see and use, and keeps hidden anything you want kept private. A blog widget that displays your current location in public can be restricted to display just your current city, whilst a service that provides you with a list of the nearest ATMs will operate better with a precise street address. Even if your iPhone tells Fire Eagle exactly where you are, consuming applications only see what you want them to see. It’s important for users to realise that they’re in control, but also important for application developers to remember that you cannot rely on having super-accurate information available all the time. You need to build location aware applications which degrade gracefully, because users will provide fuzzier information — either through choice, or through less accurate sources. Application specific permissions are controlled through an OAuth API. 
Each application has a unique key, used to request a second, user-specific key that permits access to that user’s information. You store that user key and it remains valid until such a time as the user revokes your application’s access. Unlike with passwords, these keys are unique per application, so revoking the access rights of one application doesn’t break all the others. Building your first Fire Eagle app: Geomarklet Fire Eagle’s developer documentation can take you through examples of writing simple applications using server side technologies (PHP, Python). Here, we’re going to write a client-side bookmarklet to make your location available in every site you use. It’s designed to fast-track the experience of having location available everywhere on the web, and show you how that can be really handy. Hopefully, this will set you thinking about how location can enhance the new applications you build in 2009. An oddity of bookmarklets Bookmarklets (or ‘favlets’, for those of an MSIE persuasion) are a strange environment to program in. Critically, you have no persistent storage available. As such, using token-auth APIs in a static environment requires you to build your application in a slightly strange way; authing yourself in advance and then hardcoding the keys into your script. Get started Before you do anything else, go to http://fireeagle.com and log in, get set up if you need to and by all means take a look around. Take a look at the mobile updaters section of the application gallery and perhaps pick out an app that will update Fire Eagle from your phone or laptop. Once that’s done, you need to register for an application key in the developer section. Head straight to /developer/create and complete the form. Since you’re building a standalone application, choose ‘Auth for desktop applications’ (rather than web applications), and select that you’ll be ‘accessing location’, not updating. At the end of this process, you’ll have two application keys, a ‘Consumer Key’ and a ‘Consumer Secret’, which look like these: Consumer Key luKrM9U1pMnu Consumer Secret ZZl9YXXoJX5KLiKyVrMZffNEaBnxnd6M These keys combined allow your application to make requests to Fire Eagle. Next up, you need to auth yourself; granting your new application permission to use your location. Because bookmarklets don’t have local storage, you can’t integrate the auth process into the bookmarklet itself — it would have no way of storing the returned key. Instead, I’ve put together a simple web frontend through which you can auth with your application. Head to Auth me, Amadeus!, enter the application keys you just generated and hit ‘Authorize with Fire Eagle’. You’ll be taken to the Fire Eagle website, just as in regular Fire Eagle applications, and after granting access to your app, be redirected back to Amadeus which will provide you with your user tokens. These tokens are used in subsequent requests to read your location. And, skip to the end… The process of building the bookmarklet, making requests to Fire Eagle, rendering it to the page and so forth follows, but if you’re the impatient type, you might like to try this out right now. Take your four API keys from above, and drag the following link to your Bookmarks Toolbar; it contains all the code described below. Before you can use it, you need to edit in your own API keys. Open your browser’s bookmark editor and where you find text like ‘YOUR_CONSUMER_KEY_HERE’, swap in the corresponding key you just generated. 
Get Location Bookmarklet Basics To start on the bookmarklet code, set out a basic JavaScript module-pattern structure: var Geomarklet = function() { return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); Next we’ll add the keys obtained in the setup step, and also some basic Fire Eagle support objects: var Geomarklet = function() { var Keys = { consumer_key: 'IuKrJUHU1pMnu', consumer_secret: 'ZZl9YXXoJX5KLiKyVEERTfNEaBnxnd6M', user_token: 'xxxxxxxxxxxx', user_secret: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' }; var LocationDetail = { EXACT: 0, POSTAL: 1, NEIGHBORHOOD: 2, CITY: 3, REGION: 4, STATE: 5, COUNTRY: 6 }; var index_offset; return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); The Location Hierarchy A successful Fire Eagle query returns an object called the ‘location hierarchy’. Depending on the level of detail shared, the index of a particular piece of information in the array will vary. The LocationDetail object maps the array indices of each level in the hierarchy to something comprehensible, whilst the index_offset variable is an adjustment based on the detail of the result returned. The location hierarchy object looks like this, providing a granular breakdown of a location, in human consumable and machine-friendly forms. "user": { "location_hierarchy": [{ "level": 0, "level_name": "exact", "name": "707 19th St, San Francisco, CA", "normal_name": "94123", "geometry": { "type": "Point", "coordinates": [-0.2347530752, 67.232323] }, "label": null, "best_guess": true, "id": , "located_at": "2008-12-18T00:49:58-08:00", "query": "q=707%2019th%20Street,%20Sf" }, { "level": 1, "level_name": "postal", "name": "San Francisco, CA 94114", "normal_name": "12345", "woeid": , "place_id": "", "geometry": { "type": "Polygon", "coordinates": [], "bbox": [] }, "label": null, "best_guess": false, "id": 59358791, "located_at": "2008-12-18T00:49:58-08:00" }, { "level": 2, "level_name": "neighborhood", "name": "The Mission, San Francisco, CA", "normal_name": "The Mission", "woeid": 23512048, "place_id": "Y12JWsKbApmnSQpbQg", "geometry": { "type": "Polygon", "coordinates": [], "bbox": [] }, "label": null, "best_guess": false, "id": 59358801, "located_at": "2008-12-18T00:49:58-08:00" } ] } In this case the first object has a level of 0, so the index_offset is also 0. Prerequisites To query Fire Eagle we call in some existing libraries to handle the OAuth layer and the Fire Eagle API call. Your bookmarklet will need to add the following scripts into the page: The SHA1 hashing algorithm; The OAuth wrapper; An extension for the OAuth wrapper; The Fire Eagle wrapper itself. When the bookmarklet is first run, we’ll insert these scripts into the document. We’re also inserting a stylesheet to dress up the UI that will be generated. If you want to follow along with any of the more mundane parts of the bookmarklet, you can download the full source code. Rendering This bookmarklet can be extended to support any formatting of your location you like, but for sake of example I’m going to build three common formatters that you’ll find useful for common location scenarios: sites which already ask for your location; and publishing systems that accept tags or HTML mark-up. All the rendering functions are items in a renderers object, so they can be iterated through easily, making it trivial to add new formatting functions as you find new use cases (just add another function to the object). 
var renderers = { geotag: function(user) { if(LocationDetail.EXACT !== index_offset) { return false; } else { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; return "geo:lat=" + coords[0] + ", geo:lon=" + coords[1]; } }, city: function(user) { if(LocationDetail.CITY < index_offset) { return false; } else { return user.location_hierarchy[LocationDetail.CITY - index_offset].name; } } You should always fail gracefully, and in line with catering to users who choose not to share their location precisely, always check that the location has been returned at the level you require. Geotags are expected to be precise, so if an exact location is unavailable, returning false will tell the rendering aspect of the bookmarklet to ignore the function altogether. These first two are quite simple, geotag returns geo:lat=-0.2347530752, geo:lon=67.232323 and city returns San Francisco, CA. This final renderer creates a chunk of HTML using the adr and geo microformats, using all available aspects of the location hierarchy, and can be used to geotag any content you write on your blog or in comments: html: function(user) { var geostring = ''; var adrstring = ''; var adr = []; adr.push('<p class="adr">'); // city if(LocationDetail.CITY >= index_offset) { adr.push( '\n <span class="locality">' + user.location_hierarchy[LocationDetail.CITY-index_offset].normal_name + '</span>,' ); } // county if(LocationDetail.REGION >= index_offset) { adr.push( '\n <span class="region">' + user.location_hierarchy[LocationDetail.REGION-index_offset].normal_name + '</span>,' ); } // locality if(LocationDetail.STATE >= index_offset) { adr.push( '\n <span class="region">' + user.location_hierarchy[LocationDetail.STATE-index_offset].normal_name + '</span>,' ); } // country if(LocationDetail.COUNTRY >= index_offset) { adr.push( '\n <span class="country-name">' + user.location_hierarchy[LocationDetail.COUNTRY-index_offset].normal_name + '</span>' ); } // postal if(LocationDetail.POSTAL >= index_offset) { adr.push( '\n <span class="postal-code">' + user.location_hierarchy[LocationDetail.POSTAL-index_offset].normal_name + '</span>,' ); } adr.push('\n</p>\n'); adrstring = adr.join(''); if(LocationDetail.EXACT === index_offset) { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; geostring = '<p class="geo">' +'\n <span class="latitude">' + coords[0] + '</span>;' + '\n <span class="longitude">' + coords[1] + '</span>\n</p>\n'; } return (adrstring + geostring); } Here we check the availability of every level of location and build it into the adr and geo patterns as appropriate. Just as for the geotag function, if there’s no exact location the geo markup won’t be returned. Finally, there’s a rendering method which creates a container for all this data, renders all the applicable location formats and then displays them in the page for a user to copy and paste. You can throw this together with DOM methods and some simple styling, or roll in some components from YUI or JQuery to handle drawing full featured overlays. You can see this simple implementation for rendering in the full source code. Make the call With a framework in place to render Fire Eagle’s location hierarchy, the only thing that remains is to actually request your location. Having already authed through Amadeus earlier, that’s as simple as instantiating the Fire Eagle JavaScript wrapper and making a single function call. 
It’s a big deal that whilst a lot of new technologies like OAuth add some complexity and require new knowledge to work with, APIs like Fire Eagle are really very simple indeed. return { run: function() { insert_prerequisites(); setTimeout( function() { var fe = new FireEagle( Keys.consumer_key, Keys.consumer_secret, Keys.user_token, Keys.user_secret ); var script = document.createElement('script'); script.type = 'text/javascript'; script.src = fe.getUserUrl( FireEagle.RESPONSE_FORMAT.json, 'Geomarklet.callback' ); document.body.appendChild(script); }, 2000 ); }, callback: function(json) { if(json.rsp && 'fail' == json.rsp.stat) { alert('Error ' + json.rsp.code + ": " + json.rsp.message); } else { index_offset = json.user.location_hierarchy[0].level; draw_selector(json); } } }; We first insert the prerequisite scripts required for the Fire Eagle request to function, and to prevent trying to instantiate the FireEagle object before it’s been loaded over the wire, the remaining instantiation and request is wrapped inside a setTimeout delay. We then create the request URL, referencing the Geomarklet.callback callback function and then append the script to the document body — allowing a cross-domain request. The callback itself is quite simple. Check for the presence and value of rsp.stat to test for errors, and display them as required. If the request is successful, set the index_offset — to adjust for the granularity of the location hierarchy — and then pass the object to the renderer. The result? When Geomarklet.run() is called, your location from Fire Eagle is read, and each renderer displayed on the page in an easily copy and pasteable form, ready to be used however you need. Deploy The final step is to convert this code into a long string for use as a bookmarklet. Easiest for Mac users is the JavaScript bundle in TextMate — choose Bundles: JavaScript: Copy as Bookmarklet to Clipboard. Then create a new ‘Get Location’ bookmark in your browser of choice and paste in. Those without TextMate can shrink their code down into a single line by first running their code through the JSLint tool (to ensure the code is free from errors and has all the required semi-colons) and then use a find-and-replace tool to remove line breaks from your code (or even run your code through JSMin to shrink it down). With the bookmarklet created and added to your bookmarks bar, you can now call up your location on any page at all. Get a feel for a web where your location is just another reliable part of the browsing experience. Where next? So, the Geomarklet you’ve been guided through is a pretty simple premise and pretty simple output. But from this base you can start to extend: Add code that will insert each of the location renderings directly into form fields, perhaps, or how about site-specific handlers to add your location tags into the correct form field in Wordpress or Tumblr? Paste in your current location to Google Maps? Or Flickr? Geomarklet gives you a base to start experimenting with location on your own pages and the sites you browse daily. The introduction of consumer accessible geo to the web is an adventure of discovery; not so much discovering new locations, but discovering location itself. 2008 Ben Ward benward 2008-12-21T00:00:00+00:00 https://24ways.org/2008/geotag-everywhere-with-fire-eagle/ code
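As the article notes, the renderers object is designed so that new formatters are just extra properties; here is a hedged sketch of a hypothetical 'state' renderer, modelled directly on the city function above (the name and level choice are mine, not from the original source):

state: function(user) {
  // fail gracefully if location isn't shared at state level or better
  if(LocationDetail.STATE < index_offset) {
    return false;
  }
  // look up the state entry, adjusting for the hierarchy's offset
  return user.location_hierarchy[LocationDetail.STATE - index_offset].name;
}

Dropped into the renderers object alongside geotag, city and html, it would be picked up automatically by the same iteration and drawn into the page with the others.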
111 Geometric Background Patterns When the design is finished and you’re about to start the coding process, you have to prepare your graphics. If you’re working with a pattern background you need to export only the repeating fragment. It can be a bit tricky to isolate a fragment to achieve a seamless pattern background. For geometric patterns there is a method I always follow and that I want to share with you. Take for example a perfect 45° diagonal line pattern. How do you define this pattern fragment so it will be rendered seamlessly? Here is the method I usually follow to avoid a mismatch. First, zoom in so you see enough detail and you can distinguish the pixels. Select the Rectangular Marquee Selection tool and start your selection at the intersection of 2 different colors of a diagonal line. Hold down the Shift key while dragging so you drag a perfect square. Release the mouse when you reach the exact same intersection as your starting point, at the top right. Copy this fragment (using Copy Merged: Cmd/Ctrl + Shift + C) and paste the fragment in a new layer. Give this layer the name ‘pattern’. Now hold down the Command Key (Control Key on Windows) and click on the ‘pattern’ layer in the Layers Palette to select the fragment. Now go to Edit > Define Pattern, enter a name for your pattern and click OK. Test your pattern in a new document. Create a new document of 600 px by 400 px, hit Cmd/Ctrl + A and go to Edit > Fill… and choose your pattern. If the result is OK, you have created a perfect pattern fragment. Below you see this pattern enlarged. The guides show the boundaries of the pattern fragment and the red pixels are the reference points. The red pixels at the top right, bottom right and bottom left should match the red pixel at the top left. This technique should work for every geometric pattern. Some patterns are easier than others, but this, and the Photoshop pattern fill test, has always been my guideline. Other geometric pattern examples Example 1 Not all geometric pattern fragments are squares. Some patterns look easy at first sight, because they look very repetitive, but they can be a bit tricky. Zoomed in pattern fragment with point of reference shown: Example 2 Some patterns have a clear repeating point that can guide you, such as the blue small circle of this pattern as you can see from this zoomed in screenshot: Zoomed in pattern fragment with point of reference shown: Example 3 The different diagonal colors make it a bit more tricky to extract the correct pattern fragment. The orange dot, which is the starting point of the selection, is captured a few times inside the fragment selection: 2008 Veerle Pieters veerlepieters 2008-12-02T00:00:00+00:00 https://24ways.org/2008/geometric-background-patterns/ design
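Once the fragment is exported, say as a hypothetical pattern.png (the filename here is mine, purely for illustration), putting it to work as a page background is a one-line CSS declaration, sketched below:

body {
  /* the seamless fragment tiles in both directions by default */
  background: #fff url(pattern.png) repeat 0 0;
}

Because the fragment has already passed the Photoshop pattern fill test, the same square will tile seamlessly when the browser repeats it.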