{"rowid": 271, "title": "Creating Custom Font Stacks with Unicode-Range", "contents": "Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We\u2019ve all used it \u2014 either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts.\n\nIf you\u2019re like me, however, you\u2019ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec.\n\nOne such property \u2014 the unicode-range descriptor \u2014 sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways.\n\nUnicode-range\n\nThe unicode-range descriptor is designed to help when using fonts that don\u2019t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers. \n\n@font-face {\n font-family: BBCBengali;\n src: url(fonts/BBCBengali.ttf) format(\"opentype\");\n unicode-range: U+00-FF;\n}\n\nIn this example, the font is to be used for characters in the range of U+00 to U+FF which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to \u00ff at U+FF \u2013 the extent of the Basic Latin character range.\n\nBy adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts.\n\nWhen I say that it\u2019s possible to specify the range of characters the font covers, that\u2019s true, but what you\u2019re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together.\n\nThe best available ampersand\n\nA few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a element with a class applied:\n\n&\n\nA CSS rule can then be written to select the and apply a different font:\n\nspan.amp {\n font-family: Baskerville, Palatino, \"Book Antiqua\", serif;\n}\n\nThat\u2019s a perfectly serviceable technique, but the drawbacks are clear \u2014 you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn\u2019t always possible when working with a CMS.\n\nPerhaps we could do this with unicode-range.\n\nA better best available ampersand\n\nThe Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so:\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n\nWhat we\u2019ve done here is specify a new family called Ampersand and created a font stack for it with the user\u2019s locally installed copies of Baskerville, Palatino or Book Antiqua. We\u2019ve then limited it to a single character range \u2014 the ampersand. 
Of course, those don\u2019t need to be local fonts \u2014 they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life.\n\nWe can then use that new family in a regular font stack.\n\nh1 {\n font-family: Ampersand, Arial, sans-serif;\n}\n\nWith this in place, any h1
elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn\u2019t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial.\n\nYou didn\u2019t think it was that easy, did you?\n\nOh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all.\n\nIf you\u2019re familiar with how CSS works when it comes to unsupported properties, you\u2019ll know that if a browser encounters a property it doesn\u2019t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius \u2014 if the browser can\u2019t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect.\n\nLess perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters \u2014 the whole range. If you\u2019re using a fancy font for flamboyant ampersands, you probably don\u2019t want that applied to all your text if unicode-range isn\u2019t supported. That would be bad. Really bad.\n\nEnsuring good fallbacks\n\nAs ever, the trick is to make sure that there\u2019s a sensible fallback in place if a browser doesn\u2019t have support for whatever technology you\u2019re trying to use. This is where being a super nerd about understanding the spec you\u2019re working with really pays off.\n\nWe can make use of the rules of the CSS cascade to make sure that if unicode-range isn\u2019t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren\u2019t implemented.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n font-family: 'Ampersand';\n src: local('Arial');\n}\n\nIn theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don\u2019t have support, the second rule should just supersede the first, setting the font to Arial. \n\nUnfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That\u2019s both unexpected and unfortunate. Bad Firefox. On your rug.\n\nIf that doesn\u2019t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That\u2019s really what we\u2019re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we\u2019re never going to use in our page? 
Then it wouldn\u2019t affect the outcome for browsers that do support ranges.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n /* Ampersand fallback font */\n font-family: 'Ampersand';\n src: local('Arial');\n unicode-range: U+270C;\n}\n\nBy specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we\u2019re not using in any of our pages \u2014 U+270C.\n\nSo we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect!\n\nThat obscure character, my friends, is what Unicode defines as the VICTORY HAND.\n\n\u270c\n\nSo, how can we use this?\n\nAmpersands are a neat trick, and it works well in browsers that support ranges, but that\u2019s not really the point of all this. Styling ampersands is fun, but they\u2019re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting.\n\nHow do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end.\n\nOf course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you\u2019re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you\u2019ll have to use your own best judgement based on your needs and audience.\n\nOne thing to keep in mind is that if you\u2019re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn\u2019t be downloaded if none of the characters within the Unicode range are present in a given page.\n\nAs ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.", "year": "2011", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2011-12-01T00:00:00+00:00", "url": "https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/", "topic": "code"} {"rowid": 280, "title": "Conditional Loading for Responsive Designs", "contents": "On the eighteenth day of last year\u2019s 24 ways, Paul Hammond wrote a great article called Speed Up Your Site with Delayed Content. He outlined a technique for loading some content \u2014 like profile avatars \u2014 after the initial page load. This gives you a nice performance boost.\n\nThere\u2019s another situation where this kind of delayed loading could be really handy: mobile-first responsive design.\n\nResponsive design combines three techniques:\n\n\n\ta fluid grid\n\tflexible images\n\tmedia queries\n\n\nAt first, responsive design was applied to existing desktop-centric websites to allow the layout to adapt to smaller screen sizes. 
But more recently it has been combined with another innovative approach called mobile first.\n\nRather than starting with the big, bloated desktop site and then scaling down for smaller devices, it makes more sense to start with the constraints of the small screen and then scale up for larger viewports. Using this approach, your layout grid, your large images and your media queries are applied on top of the pre-existing small-screen design. It\u2019s taking progressive enhancement to the next level.\n\nOne of the great advantages of the mobile-first approach is that it forces you to really focus on the core content of your page. It might be more accurate to think of this as a content-first approach. You don\u2019t have the luxury of sidebars or multiple columns to fill up with content that\u2019s just nice to have rather than essential.\n\nBut what happens when you apply your media queries for larger viewports and you do have sidebars and multiple columns? Well, you can load in that nice-to-have content using the same kind of Ajax functionality that Paul described in his article last year. The difference is that you first run a quick test to see if the viewport is wide enough to accommodate the subsidiary content. This is conditional delayed loading.\n\nConsider this situation: I\u2019ve published an article about cats and I\u2019d like to include relevant cat-related news items in the sidebar \u2026but only if there\u2019s enough room on the screen for a sidebar.\n\nI\u2019m going to use Google\u2019s News API to return the search results. This is the ideal time to use delayed loading: I don\u2019t want a third-party service slowing down the rendering of my page so I\u2019m going to fire off the request after my document has loaded.\n\nHere\u2019s an example of the kind of Ajax function that I would write:\n\nvar searchNews = function(searchterm) {\n\tvar elem = document.createElement('script');\n\telem.src = 'http://ajax.googleapis.com/ajax/services/search/news?v=1.0&q='+searchterm+'&callback=displayNews';\n\tdocument.getElementsByTagName('head')[0].appendChild(elem);\n};\n\nI\u2019ve provided a callback function called displayNews that takes the JSON result of that Ajax request and adds it to an element on the page with the ID newsresults:\n\nvar displayNews = function(news) {\n\tvar html = '',\n\titems = news.responseData.results,\n\ttotal = items.length;\n\tif (total>0) {\n\t\tfor (var i=0; i<total; i++) {\n\t\t\tvar item = items[i];\n\t\t\thtml+= '<article class=\"newsitem\">';\n\t\t\thtml+= '<h3>';\n\t\t\thtml+= '<a href=\"'+item.unescapedUrl+'\">'+item.titleNoFormatting+'</a>';\n\t\t\thtml+= '</h3>';\n\t\t\thtml+= '<p>';\n\t\t\thtml+= item.content;\n\t\t\thtml+= '</p>';\n\t\t\thtml+= '</article>';\n\t\t}\n\t\tdocument.getElementById('newsresults').innerHTML = html;\n\t}\n};\n\nNow, I can call that function at the bottom of my document:\n\n<script>\nsearchNews('cats');\n</script>\n\nIf I only want to run that search when there\u2019s room for a sidebar, I can wrap it in an if statement:\n\n<script>\nif (document.documentElement.clientWidth > 640) {\n\tsearchNews('cats');\n}\n</script>\n\nIf the browser is wider than 640 pixels, that will fire off a search for news stories about cats and put the results into the newsresults element in my markup:\n\n<div id=\"newsresults\">\n</div>
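\n\nIf the sidebar column itself only appears at larger widths, the CSS breakpoint can mirror that 640 pixel test. A minimal sketch, assuming a .sidebar class for the extra column:\n\n@media screen and (min-width: 640px) {\n\t.sidebar {\n\t\tdisplay: block;\n\t}\n}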
\n\nThis works pretty well but I\u2019m making an assumption that people with small-screen devices wouldn\u2019t be interested in seeing that nice-to-have content. You know what they say about assumptions: they make an ass out of you and umptions. I should really try to give everyone at least the option to get to that extra content:\n\n\n\nSee the result\n\nVisitors with small-screen devices will see that link to the search results; visitors with larger screens will get the search results directly.\n\nI\u2019ve been concentrating on HTML and JavaScript, but this technique has consequences for content strategy and information architecture. Instead of thinking about possible page content in a binary way as either \u2018on the page\u2019 or \u2018not on the page\u2019, conditional loading introduces a third \u2018it\u2019s complicated\u2019 option.\n\nThis was just a simple example but I hope it illustrates that conditional loading could become an important part of the content-first responsive design approach.", "year": "2011", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2011-12-02T00:00:00+00:00", "url": "https://24ways.org/2011/conditional-loading-for-responsive-designs/", "topic": "ux"} {"rowid": 284, "title": "Subliminal User Experience", "contents": "The term \u2018user experience\u2019 is often used vaguely to quantify common elements of the interaction design process: wireframing, sitemapping and so on. UX undoubtedly involves all of these principles to some degree, but there really is a lot more to it than that.\n\nGood UX is characterized by providing the user with constant feedback as they step through your interface. It means thinking about and providing fallbacks and error resolutions in even the rarest of scenarios. It\u2019s about omitting clutter to make way for the necessary, and using the most fundamental of design tools to influence a user\u2019s path. It means making no assumptions, designing right down to the most distinct details and going one step further every single time. In many cases, good UX is completely subliminal.\n\nThere are simple tools and subtleties we can build into our products to enhance the overall experience but, in order to do so, we really have to step beyond where we usually draw the line on what to design.\n\nThe purpose of this article is not to provide technical how-tos, as the functionality is, in most cases, quite simple and could be implemented in a myriad of ways. Rather, it will present a handful of ideas for enhancing the experience of an interface at a deeper level of design without relying on the container.\n\nWe\u2019ll cover three elements that should get you thinking in the right mindset:\n\n\n\tprogress activity and post-active states\n\tpseudo-class preloading\n\tbuttons and their (mis)behaviour\n\n\nProgress activity and the post-active state\n\nWe\u2019ve long established that we can\u2019t control the devices our products are viewed on, which browser they\u2019ll run in or what connection speed will be used to access them. We accept this all as factual, so why is it so often left to the browser to provide feedback to the user when an event is triggered or an error encountered? The browser isn\u2019t part of the interface \u2014 it\u2019s merely a container. 
A simple, visual recognition of your users\u2019 activity may be all it takes to make or break the product.\n\nLet\u2019s begin with a commonly overlooked case: progress activity.\n\nA user moves their cursor over a hyperlink or button, which is clearly defined as one by the visual language of your content. Upon doing so, they trigger the :hover state to confirm this element is indeed interactive. So far, so good. What happens next is where it starts to fall apart: the user hits this link, presumably triggering an :active state, which is then returned to the normal state upon release. And then what?\n\n\n\nFrom this point on, your user is in limbo. The link has fallen back to either its regular or :visited state. You\u2019ve effectively abandoned them and are relying entirely on the browser they\u2019re using to communicate that something is happening. This poses quite a few problems:\n\n\n\tThe user may lose focus of what they were doing.\n\tThere is little consistency between progress indication in browsers.\n\tThe user may not even notice that their action has been acknowledged.\n\n\nHow many times have one or more of these events happened to you due to a lack of communication from the interface?\n\nThink about the differences between Safari and Chrome in this area \u2014 two browsers that, when compared to each other, are relatively similar in nature, though this basic feature differs in execution.\n\n\n\nLike all aspects of designing the user experience, there is no one true way to fix this problem, but we can introduce details that many users will unconsciously appreciate.\n\nConsider the basic loading indicator. It\u2019s nothing new \u2014 in fact, some would argue it\u2019s quite a clich\u00e9. However, whether using a spinning wheel or a progress bar, a gif or JavaScript, or something more sophisticated, these simple tools create an illusion of movement, progress and activity. Depending on the implementation, progress indication graphics can significantly increase a user\u2019s perception of the speed in which an event is taking place. Combine this with a cursor change and a lock over the element to prevent double-clicking or reloading, and your chances of keeping your user\u2019s valuable attention have significantly increased.\n\nDemo: Progress activity and the post-active state\n\nThis same logic applies to all aspects of defaulting in a browser, from micro-elements like this up to something as simple as a 404 page. The difference in a user\u2019s reaction to hitting the default Apache 404 and a hand-crafted, branded page are phenomenal and there are no prizes for guessing which one they\u2019re more likely to exit from.\n\nPseudo-class preloading\n\nAnother detail that it pays well to look after is the use and abuse of the :hover element and, more importantly, the content revealed by it. Chances are you\u2019re using the :hover pseudo-class somewhere in almost every screen you create. If content is being revealed on :hover and that content takes some time to load, there will inevitably be a delay the first time it is initiated. It appears tacky and half-finished when a tooltip or drop-down loads instantly, only to have its background or supporting elements follow through a second or two later. So, let\u2019s preload the elements we know we\u2019ll need.\n\nA very simple application of this would be to load each file into the default state of a visible element and offset them by a large number. 
This ensures our elements have loaded and are ready if and when they need to be displayed.\n\nelement {\n background: url(path/to/image.jpg) -9999em -9999em no-repeat;\n }\nelement .tooltip {\n display: none;\n }\nelement:hover .tooltip {\n display: block;\n background: url(path/to/image.jpg) 0 0;\n }\n\nBackground images are just one example. Of course, the same logic can apply to any form of revealed content. Using a sprite graphic can also be a clever \u2014 albeit tedious \u2014 method for achieving the same goal, so if you\u2019re using a sprite, preloading in this way may not be necessary\n\nThe differences between preloading and not can only be visualized properly with an actual demonstration.\n\nDemo: Preloading revealed content\n\nButtons and their (mis)behaviour\n\nAlmost all of the time, a button serves just one purpose: to be clicked (or tapped). When a button\u2019s pressed, therefore, if anything other than triggering the desired event occurs, a user naturally becomes frustrated. I often get funny looks when talking about this, but designing the details of a button is something I consider essential.\n\nIt goes without saying that a button should always visually recognise :hover and :active states. We can take that one step further and disable some actions that get in the way of pressing the button.\n\nIt\u2019s rare that a user would ever want to select and use the text on a button, so let\u2019s cleanly disable it:\n\nelement {\n -moz-user-select: -moz-none;\n -webkit-user-select: none;\n user-select: none;\n }\n\nIf the button is image-based or contains an image, we could also disable user dragging to make sure the image element stays locked to the button:\n\nelement {\n -moz-user-drag: -moz-none;\n -webkit-user-drag: none;\n user-drag: none;\n }\n\nDemo: A more usable button\n\nDisabling global features like this should be done with utmost caution as it\u2019s very easy to cross the line between enhancement and friction. Cases where this is acceptable are very rare, but it\u2019s a good trick to keep in mind nevertheless. Both Apple\u2019s iCloud and Metalab\u2019s Flow applications use these tools appropriately and to great extent.\n\nYou could argue that the visual feedback of having the text selected or image dragged when a user mis-hits the button is actually a positive effect, informing the user that their desired action did not work. However, covering for human error should be a designer\u2019s job, not that of our users. We can (almost) ensure it does work for them by accommodating for errors like this in most cases.\n\nFinal thoughts\n\nDesigning to this level of detail can seem obsessive, but as a designer and user of many interfaces and applications, I believe it can be the difference between a good user experience and a great one.\n\nThe samples you\u2019ve just seen are only a fraction of the detail we can design for. Keep in mind that the demonstrations, code and methods above outline just one way to do this. 
You may not agree with all of these processes or have the time and desire to consider them, but one fact remains: it\u2019s not the technology, or the way it\u2019s done that\u2019s important \u2014 it\u2019s the logic and the concept of designing everything.", "year": "2011", "author": "Chris Sealey", "author_slug": "chrissealey", "published": "2011-12-03T00:00:00+00:00", "url": "https://24ways.org/2011/subliminal-user-experience/", "topic": "ux"} {"rowid": 274, "title": "Adaptive Images for Responsive Designs", "contents": "So you\u2019ve been building some responsive designs and you\u2019ve been working through your checklist of things to do:\n\n\n\tYou started with the content and designed around it, with mobile in mind first.\n\tYou\u2019ve gone liquid and there\u2019s nary a px value in sight; % is your weapon of choice now.\n\tYou\u2019ve baked in a few media queries to adapt your layout and tweak your design at different window widths.\n\tYou\u2019ve made your images scale to the container width using the fluid Image technique.\n\tYou\u2019ve even done the same for your videos using a nifty bit of JavaScript.\n\n\nYou\u2019ve done a good job so pat yourself on the back. But there\u2019s still a problem and it\u2019s as tricky as it is important: image resolutions.\n\nHTML has an problem\n\nCSS is great at adapting a website design to different window sizes \u2013 it allows you not only to tweak layout but also to send rescaled versions of the design\u2019s images. And you want to do that because, after all, a smartphone does not need a 1,900-pixel background image1.\n\nHTML is less great. In the same way that you don\u2019t want CSS background images to be larger than required, you don\u2019t want that happening with s either. A smartphone only needs a small image but desktop users need a large one. Unfortunately s can\u2019t adapt like CSS, so what do we do?\n\nWell, you could just use a high resolution image and the fluid image technique would scale it down to fit the viewport; but that\u2019s sending an image five or six times the file size that\u2019s really needed, which makes it slow to download and unpleasant to use. Smartphones are pretty impressive devices \u2013 my ancient iPhone 3G is more powerful in every way than my first proper computer \u2013 but they\u2019re still terribly slow in comparison to today\u2019s desktop machines. Sending a massive image means it has to be manipulated in memory and redrawn as you scroll. You\u2019ll find phones rapidly run out of RAM and slow to a crawl.\n\nWell, OK. You went mobile first with everything else so why not put in mobile resolution s too? Because even though mobile devices are rapidly gaining share in your analytics stats, they\u2019re still not likely to be the major share of your user base. I don\u2019t think desktop users would be happy with pokey little mobile resolution images, do you? What we need are adaptive images.\n\nAdaptive image techniques\n\nThere are a number of possible solutions, each with pros and cons, and it\u2019s not as simple to find a graceful solution as you might expect.\n\nYour first thought might be to use JavaScript to trawl through the markup and rewrite the source attribute. That\u2019ll get you the right end result, but it\u2019ll have done it in a way you absolutely don\u2019t want. That\u2019s because of the way browsers load resources. It starts to load the HTML and builds the page on-the-fly; as soon as it finds an element it immediately asks the server for that image. 
After the HTML has finished loading, the JavaScript will run, change the src attribute, and then the browser will request that new image too. Not instead of, but as well as. Not good: that\u2019s added more bloat instead of cutting it.\n\nPlain JavaScript is out then, which is a problem, because what other tools do we have to work with as web designers? Let\u2019s ignore that for now and I\u2019ll outline another issue with the concept of serving different resolution images for different window widths: a basic file management problem. To request a different image, that image has to exist on the server. How\u2019s it going to get there? That\u2019s not a trivial problem, especially if you have non-technical users that update content through a CMS. Let\u2019s say you solve that \u2013 do you plan on a simple binary switch: big image|little image? Is that really efficient or future-proof? What happens if you have an archive of existing content that needs to behave this way? Can you apply such a solution to existing content or markup?\n\nThere\u2019s a detailed round-up of potential techniques for solving the adaptive images problem over at the Cloud Four blog if you fancy a dig around exploring all the options currently available. But I\u2019m here to show you what I think is the most flexible and easy to implement solution, so here we are.\n\nAdaptive Images\n\nAdaptive Images aims to mitigate most of the issues surrounding the problems of bringing the venerable tag into the 21st century. And it works by leaving that tag completely alone \u2013 just add that desktop resolution image into the markup as you\u2019ve been doing for years now. We\u2019ll fix it using secret magic techniques and bottled pixie dreams. Well, fine: with one .htaccess file, one small PHP file and one line of JavaScript. But you\u2019re killing the mystique with that kind of talk.\n\nSo, what does this solution do?\n\n\n\tIt allows s to adapt to the same break points you use in your media queries, giving granular control in the same way you get with your CSS.\n\tIt installs on your server in five minutes or less and after that is automatic and you don\u2019t need to do anything.\n\tIt generates its own rescaled images on the server and doesn\u2019t require markup changes, so you can apply it to existing web content.\n\tIf you wish, it will make all of your images go mobile-first (just in case that\u2019s what you want if JavaScript and cookies aren\u2019t available).\n\n\nSound good? I hope so. Here\u2019s what you do.\n\nSetting up and rolling out\n\nI\u2019ll assume you have some basic server knowledge along with that wealth of front-end wisdom exploding out of your head: that you know not to overwrite any existing .htaccess file for example, and how to set file permissions on your server. Feeling up to it? Excellent.\n\n\n\tDownload the latest version of Adaptive Images either from the website or from the GitHub repository.\n\tUpload the included .htaccess and adaptive-images.php files into the root folder of your website.\n\tCreate a directory called ai-cache and make sure the server can write to it (CHMOD 755 should do it).\n\tAdd the following line of JavaScript into the of your site:\n\n\n\n\nThat\u2019s it, unless you want to tweak the default settings. 
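\n\nThe line\u2019s job is to write the visitor\u2019s largest screen dimension into a cookie that the PHP can read later; a minimal sketch (the exact cookie name is an assumption) looks something like this:\n\n<script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script>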
You likely do, but essentially you\u2019re already up and running.\n\nHow it works\n\nAdaptive Images does a number of things depending on the scenario the script has to handle, but here\u2019s a basic overview of what it does when you load a page running it:\n\n\n\tA session cookie is written with the value of the visitor\u2019s screen size in pixels.\n\tThe HTML encounters an tag and sends a request to the server for that image. It also sends the cookie, because that\u2019s how browsers work.\n\tApache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL.\n\tThere are! The .htaccess says \u201cHey, server! Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.\u201d\n\tThe PHP file then does some intelligent thinking which can cover a number of scenarios, but I\u2019ll illustrate one path that can happen:\n\n\n\t\n\t\tThe PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px.\n\t\tThe PHP has a look at the available media query sizes that were configured and decides which one matches the user\u2019s device.\n\t\tIt then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there.\n\t\tWe\u2019ll pretend it doesn\u2019t \u2013 the PHP then goes to the actual requested URI and finds that the original file does exist.\n\t\tIt has a look to see how wide that image is. If it\u2019s already smaller than the user\u2019s screen width it sends it along and stops there. But, let\u2019s pretend the image is 1,000px wide.\n\t\tThe PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it.\n\t\n\nIt also does a few other things when needs arise, for example:\n\n\n\tIt sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it.\n\tIt handles cases where there isn\u2019t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size.\n\tIt compares timestamps between the source image and the generated cache image \u2013 to ensure that if the source image gets updated, the old cached file won\u2019t be sent.\n\n\nCustomizing\n\nThere are a few options you can customize if you don\u2019t like the default values. By looking in the PHP\u2019s configuration section at the top of the file, you can:\n\n\n\tSet the resolution breakpoints to match your media query break points.\n\tChange the name and location of the ai-cache folder.\n\tChange the quality level any generated JPG images are saved at.\n\tHave it perform a subtle sharpen on rescaled images to help keep detail.\n\tToggle whether you want it to compare the files in the cache folder with the source ones or not.\n\tSet how long the browser should cache the images for.\n\tSwitch between a mobile-first or desktop-first approach when a cookie isn\u2019t found.\n\n\nMore importantly, you probably want to omit a few folders from the AI behaviour. You don\u2019t need or want it resizing the images you\u2019re using in your CSS, for example. That\u2019s fine \u2013 just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. 
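\n\nA hypothetical exclusion might look like this (the directory name is just an example); note the exclamation mark, which is what makes it an \u2018ignore this folder\u2019 rule:\n\nRewriteCond %{REQUEST_URI} !^/assets/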
Or, if you\u2019re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it\u2019ll only apply AI behaviour to a given list of folders.\n\nCaveats\n\nAs I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I\u2019ve seen so far.\n\nThis is a PHP solution\n\nI wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don\u2019t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence2 and I would welcome anyone to contribute a port of the code3. \n\nContent delivery networks\n\nAdaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won\u2019t allow that to happen. Adaptive Images will not work if you\u2019re using a CDN to deliver your website.\n\nA minor but interesting cookie issue.\n\nAs Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested \u2013 even though the JavaScript that sets the cookie is loaded by the browser before it finds any tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem:\n\n\n\tThe $mobile_first toggle allows you to choose what to send to a browser if a cookie isn\u2019t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest.\n\tEven if set to TRUE, Adaptive Images checks the User Agent String. If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE.\n\n\nThis means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn\u2019t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest.\n\nThe best way to get a cookie written is to use JavaScript as I\u2019ve explained above, because it\u2019s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP \u2018image\u2019 to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won\u2019t be set until after images have been requested.\n\nThe future\n\nFor today, this is a pretty good solution. It works, and as it doesn\u2019t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you\u2019re good to go \u2013 you\u2019d never know AI had been there.\n\nHowever, this isn\u2019t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future.\n\nFirst, we could do with browsers sending far more information about the user\u2019s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. 
The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn\u2019t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for.\n\nSecondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does.\n\nI hope you\u2019ve found this interesting and will find Adaptive Images useful.\n\nFootnotes\n\n1 While I\u2019m talking about preventing smartphones from downloading resources they don\u2019t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files.\n\n2 Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository. \n\n3 There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images.", "year": "2011", "author": "Matt Wilcox", "author_slug": "mattwilcox", "published": "2011-12-04T00:00:00+00:00", "url": "https://24ways.org/2011/adaptive-images-for-responsive-designs/", "topic": "ux"} {"rowid": 266, "title": "Collaborative Development for a Responsively Designed Web", "contents": "In responsive web design we\u2019ve found a technique that allows us to design for the web as a medium in its own right: one that presents a fluid, adaptable and ever changing canvas.\n\nUntil this point, we gave little thought to the environment in which users will experience our work, caring more about the aggregate than the individual. The applications we use encourage rigid layouts, whilst linear processes focus on clients signing off paintings of websites that have little regard for behaviour and interactions. The handover of pristine, pixel-perfect creations to developers isn\u2019t dissimilar to farting before exiting a crowded lift, leaving front-end developers scratching their heads as they fill in the inevitable gaps. If you haven\u2019t already, I recommend reading Drew\u2019s checklist of things to consider before handing over a design.\n\nSomehow, this broken methodology has survived for the last fifteen years or so. Even the advent of web standards has had little impact. Now, as we face an onslaught of different devices, the true universality of the web can no longer be ignored.\n\nResponsive web design is just the thin end of the wedge. Largely concerned with layout, its underlying philosophy could ignite a trend towards interfaces that adapt to any number of different variables: input methods, bandwidth availability, user preference \u2013 you name it!\n\nWith such adaptability, a collaborative and iterative process is required. 
Ethan Marcotte, who worked with the team behind the responsive redesign of the Boston Globe website, talked about such an approach in his book:\n\n\n\tThe responsive projects I\u2019ve worked on have had a lot of success combining design and development into one hybrid phase, bringing the two teams into one highly collaborative group.\n\n\nWhilst their process still involved the creation of desktop-centric mock-ups, these were presented to the entire team early on, where questions about how pages might adapt and behave at different sizes were asked. Mock-ups were quickly converted into HTML prototypes, meaning further decisions could be based on usage rather than guesswork (and endless hours spent in Photoshop).\n\nRegardless of the exact process, it\u2019s clear that the relationship between our two disciplines is more crucial than ever. Yet, historically, it seems a wedge has been driven between us \u2013 perhaps a result of segregation and waterfall-style processes \u2013 resulting in animosity.\n\nSo how can we improve this relationship? Ultimately, we\u2019ll need to adapt, but even within existing workflows we can start to overlap. Simply adjusting our attitude can effect change, and bring design and development teams closer together.\n\n\n\tGood design is constant contact.\n\n\tMark Otto\n\n\nThe way we work needs to be more open and inclusive. For example, ensuring members of the development team attend initial kick-off meetings and design workshops will not only ensure technical concerns are raised, but mean that those implementing our designs better understand the problems we\u2019re trying to solve.\n\nIt can also be useful at this stage to explain how you work and the sort of deliverables you expect to produce. This will give developers a chance to make recommendations on how these can be optimized for their own needs.\n\nYou may even find opportunities to share the load. On a recent project I worked on, our development partners offered to produce the interactive prototypes needed for user testing. This allowed us to concentrate on refining the experience, whilst they were able to get a head start on building the product.\n\nWhile developers should be involved at the beginning of projects, it\u2019s also important that designers are able to review and contribute to a product as it\u2019s being built. Any handover should be done in person, and ideally you\u2019ll have a day set aside to do so. Having additional budget available for follow-up design reviews is also recommended. Learning how to use version control tools like Subversion or Git will allow you to work within the same environment as developers, and allow you to contribute code or graphic assets directly to a project if needed.\n\nDon\u2019t underestimate the benefits of designer and developer sitting next to each other. Subtle nuances can be explored far more easily than if they were conducted over email or phone. As Ethan writes, \u201c\u2018Design\u2019 is the means, not merely the end; the path we walk over the course of a project, the choices we make\u201d.\n\nIt\u2019s from collaboration like this that I\u2019ve become fond of producing visual style guides. These demonstrate typographic treatments for common markup and patterns (blockquotes, lists, pagination, basic form controls and so on). 
Thinking in terms of components rather than individual pages not only fits in better with how a developer will implement a site, but can also ensure your design works as a coherent whole.\n\nDespite the amount of research and design produced, when it comes to the crunch, there will always be a need for compromise. As the old saying goes, \u2018fast, cheap and good \u2013 pick two.\u2019 It\u2019s important that you know which pieces are crucial to a design and which areas can allow for movement. Pick your battles wisely. Having an agreed set of design principles can be useful when making such decisions, as they help everyone focus on the goals of the project.\n\n\n\tThe best compromises are reached when both sides understand the issues of the other.\n\n\tRichard Rutter\n\n\nUltimately, better collaboration comes through a shared understanding of the different competencies required to build a website. Instead of viewing ourselves in terms of discrete roles, we should instead look to emphasize our range of abilities, and work with others whose skills are complementary.\n\nPerhaps somebody who actively seeks to broaden their knowledge is the mark of a professional. Seek these people out.\n\nThe best developers I\u2019ve worked with have a respect for design, probably having attempted to do some themselves! Having wrangled with a few MySQL databases myself, I certainly believe the obverse is true. While knowing HTML won\u2019t necessarily make you a better designer, it will help you understand the issues being faced by a front-end developer and, more importantly, allow you to offer solutions or alternative approaches.\n\nSo take a moment to think about how you work with developers and how you could improve your relationship with them. What are you doing to ease the path towards our collaborative future?", "year": "2011", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2011-12-05T00:00:00+00:00", "url": "https://24ways.org/2011/collaborative-development-for-a-responsively-designed-web/", "topic": "business"} {"rowid": 286, "title": "Defending the Perimeter Against Web Widgets", "contents": "On July 14, 1789, citizens of Paris stormed the Bastille, igniting a revolution that toppled the French monarchy. On July 14 of this year, there was a less dramatic (though more tweeted) takedown: The Deck network, which delivers advertising to some of the most popular web design and culture destinations, was down for about thirty minutes. During this period, most partner sites running ads from The Deck could not be viewed as result.\n\nA few partners were unaffected (aside from not having an ad to display). Fortunately, Dribbble, was one of them. In this article, I\u2019ll discuss outages like this and how to defend against them. But first, a few qualifiers: The Deck has been rock solid \u2013 this is the only downtime we\u2019ve witnessed since joining in June. More importantly, the issues in play are applicable to any web widget you might add to your site to display third-party content.\n\nDown and out\n\nYour defense is only as good as its weakest link. Web pages are filled with links, some of which threaten the ability of your page to load quickly and correctly. If you want your site to work when external resources fail, you need to identify the weak links on your site. In this article, we\u2019ll talk about web widgets as a point of failure and defensive JavaScript techniques for handling them.\n\nWidgets 101\n\nImagine a widget that prints out a Pun of the Day on your site. 
A simple technique for both widget provider and consumer is for the provider to expose a URL:\n\nhttp://widgetjonesdiary.com/punoftheday.js\n\nwhich returns a JavaScript file like this:\n\ndocument.write(\"<div class='punoftheday'><h2>The Pun of the Day</h2><p>Where do frogs go for beers after work? Hoppy hour!</p></div>\");\n\nThe call to document.write() injects the string passed into the document where it is called. So to display the widget on your page, simply add an external script tag where you want it to appear:\n\n<script type=\"text/javascript\" src=\"http://widgetjonesdiary.com/punoftheday.js\"></script>
\n\nThis approach is incredibly easy for both provider and consumer. But there are implications\u2026\n\ndocument.write()\u2026 or wrong?\n\nAs in the example above, scripts may perform a document.write() to inject HTML. Page rendering halts while a script is processed so any output can be inlined into the document. Therefore, page rendering speed depends on how fast the script returns the data. If an external JavaScript widget hangs, so does the page content that follows. It was this scenario that briefly stalled partner sites of The Deck last summer.\n\nThe elegant solution\n\nTo make our web widget more robust, calls to document.write() should be avoided. This can be achieved with a technique called JSONP (AKA JSON with padding). In our example, instead of writing inline with document.write(), a JSONP script passes content to a callback function:\n\npublishPun(\"

 <h2>Pun of the Day</h2>\n <p>Where do frogs go for beers after work? Hoppy hour!</p>

\");\n\nThen, it\u2019s up to the widget consumer to implement a callback function responsible for displaying the content. Here\u2019s a simple example where our callback uses jQuery to write the content into a target div:\n\n<div class=\"punoftheday\"></div>\n\n\u2026
\n<script type=\"text/javascript\">\nfunction publishPun(content) {\n $('.punoftheday').html(content); // Writes content to the display div\n}\n</script>
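\n\nFor the callback to fire, the widget\u2019s JSONP file still needs to be requested; including it after the callback definition (URL as before) is one way to do it:\n\n<script type=\"text/javascript\" src=\"http://widgetjonesdiary.com/punoftheday.js\"></script>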
\n\n\n\n\nView Example 1\n\nEven if the widget content appears at the top of the page, our script can be included at the bottom so it\u2019s non-blocking: a slow response leaves page rendering unaffected. It simply invokes the callback which, in turn, writes the widget content to its display destination.\n\nThe hack\n\nBut what to do if your provider doesn\u2019t support JSONP? This was our case with The Deck. Returning to our example, I\u2019m reminded of computer scientist David Wheeler\u2019s statement, \u201cAll problems in computer science can be solved by another level of indirection\u2026 Except for the problem of too many layers of indirection.\u201d\n\nIn our case, the indirection is to move the widget content into position after writing it to the page. This allows us to place the widget \n

script element at the bottom of the page, inside a hidden \u2018loading dock\u2019 div. Once the widget script has run and called document.write(), the loading dock contains its output:\n\n<div class=\"loading-dock hidden\">\n <script type=\"text/javascript\" src=\"http://widgetjonesdiary.com/punoftheday.js\"></script>\n <div class=\"punoftheday\">\n  <h2>Pun of the Day</h2>\n  <p>Where do frogs go for beers after work? Hoppy hour!</p>\n </div>\n</div>\n\nThe \u2018loading-dock\u2019 div
now includes the widget content, albeit hidden from view (if we\u2019ve styled the \u2018hidden\u2019 class with display: none). There\u2019s just one more step: move the content to its display destination. This line of jQuery (from above) does the trick:\n\n$('.punoftheday').append($('.loading-dock').children(':gt(0)'));\n\nThis selects all child elements in the \u2018loading-dock\u2019 div
except the first \u2013 the widget \n\n\n\nPhwoar! Dirty, isn\u2019t it? I\u2019ll stop for a moment, so you can go have a wash.\n\nDone? Excellent.\n\nWith this, the image is wrapped in a comment only for users with JavaScript. Without JavaScript, we get the image. Unlike the