{"rowid": 271, "title": "Creating Custom Font Stacks with Unicode-Range", "contents": "Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We\u2019ve all used it \u2014 either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts.\n\nIf you\u2019re like me, however, you\u2019ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec.\n\nOne such property \u2014 the unicode-range descriptor \u2014 sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways.\n\nUnicode-range\n\nThe unicode-range descriptor is designed to help when using fonts that don\u2019t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers. \n\n@font-face {\n font-family: BBCBengali;\n src: url(fonts/BBCBengali.ttf) format(\"opentype\");\n unicode-range: U+00-FF;\n}\n\nIn this example, the font is to be used for characters in the range of U+00 to U+FF which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to \u00ff at U+FF \u2013 the extent of the Basic Latin character range.\n\nBy adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts.\n\nWhen I say that it\u2019s possible to specify the range of characters the font covers, that\u2019s true, but what you\u2019re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together.\n\nThe best available ampersand\n\nA few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a element with a class applied:\n\n&\n\nA CSS rule can then be written to select the and apply a different font:\n\nspan.amp {\n font-family: Baskerville, Palatino, \"Book Antiqua\", serif;\n}\n\nThat\u2019s a perfectly serviceable technique, but the drawbacks are clear \u2014 you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn\u2019t always possible when working with a CMS.\n\nPerhaps we could do this with unicode-range.\n\nA better best available ampersand\n\nThe Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so:\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n\nWhat we\u2019ve done here is specify a new family called Ampersand and created a font stack for it with the user\u2019s locally installed copies of Baskerville, Palatino or Book Antiqua. We\u2019ve then limited it to a single character range \u2014 the ampersand. 
Of course, those don\u2019t need to be local fonts \u2014 they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life.\n\nWe can then use that new family in a regular font stack.\n\nh1 {\n font-family: Ampersand, Arial, sans-serif;\n}\n\nWith this in place, any h1
elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn\u2019t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial.\n\nYou didn\u2019t think it was that easy, did you?\n\nOh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all.\n\nIf you\u2019re familiar with how CSS works when it comes to unsupported properties, you\u2019ll know that if a browser encounters a property it doesn\u2019t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius \u2014 if the browser can\u2019t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect.\n\nLess perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters \u2014 the whole range. If you\u2019re using a fancy font for flamboyant ampersands, you probably don\u2019t want that applied to all your text if unicode-range isn\u2019t supported. That would be bad. Really bad.\n\nEnsuring good fallbacks\n\nAs ever, the trick is to make sure that there\u2019s a sensible fallback in place if a browser doesn\u2019t have support for whatever technology you\u2019re trying to use. This is where being a super nerd about understanding the spec you\u2019re working with really pays off.\n\nWe can make use of the rules of the CSS cascade to make sure that if unicode-range isn\u2019t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren\u2019t implemented.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n font-family: 'Ampersand';\n src: local('Arial');\n}\n\nIn theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don\u2019t have support, the second rule should just supersede the first, setting the font to Arial. \n\nUnfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That\u2019s both unexpected and unfortunate. Bad Firefox. On your rug.\n\nIf that doesn\u2019t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That\u2019s really what we\u2019re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we\u2019re never going to use in our page? 
Then it wouldn\u2019t affect the outcome for browsers that do support ranges.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n /* Ampersand fallback font */\n font-family: 'Ampersand';\n src: local('Arial');\n unicode-range: U+270C;\n}\n\nBy specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we\u2019re not using in any of our pages \u2014 U+270C.\n\nSo we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect!\n\nThat obscure character, my friends, is what Unicode defines as the VICTORY HAND.\n\n\u270c\n\nSo, how can we use this?\n\nAmpersands are a neat trick, and it works well in browsers that support ranges, but that\u2019s not really the point of all this. Styling ampersands is fun, but they\u2019re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting.\n\nHow do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end.\n\nOf course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you\u2019re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you\u2019ll have to use your own best judgement based on your needs and audience.\n\nOne thing to keep in mind is that if you\u2019re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn\u2019t be downloaded if none of the characters within the Unicode range are present in a given page.\n\nAs ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.", "year": "2011", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2011-12-01T00:00:00+00:00", "url": "https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/", "topic": "code"} {"rowid": 101, "title": "Easing The Path from Design to Development", "contents": "As a web developer, I have the pleasure of working with a lot of different designers. There has been a lot of industry discussion of late about designers and developers, focusing on how different we sometimes are and how the interface between our respective phases of a project (that is to say moving from a design phase into production) can sometimes become a battleground.\n\nI don\u2019t believe it has to be a battleground. It\u2019s actually more like being a dance partner \u2013 our steps are different, but as long as we know our own part and have a little knowledge of our partner\u2019s steps, it all goes together to form a cohesive dance. 
Albeit with less spandex and fewer sequins (although that may depend on the project in question).\n\nAs the process usually flows from design towards development, it\u2019s most important that designers have a little knowledge of how the site is going to be built. At the specialist web development agency I\u2019m part of, we find that designs that have been well considered from a technical perspective help to keep the project on track and on budget.\n\nBased on that experience, I\u2019ve put together my checklist of things that designers should consider before handing their work over to a developer to build.\n\nLayout\n\nOne rookie mistake made by traditionally trained designers transferring to the web is to forget a web browser is not a fixed medium. Unlike designing a magazine layout or a piece of packaging, there are lots of available options to consider. Should the layout be fluid and resize with the window, or should it be set to a fixed width? If it\u2019s fluid, which parts expand and which not? If it\u2019s fixed, should it sit in the middle of the window or to one side?\n\nIf any part of the layout is going to be flexible (get wider and narrower as required), consider how any graphics are affected. Images don\u2019t usually look good if displayed at anything other that their original size, so should they behave? If a column is going to get wider than it\u2019s shown in the Photoshop comp, it may be necessary to provide separate wider versions of any background images.\n\nText size and content volume\n\nA related issue is considering how the layout behaves with both different sizes of text and different volumes of content. Whilst text zooming rather than text resizing is becoming more commonplace as the default behaviour in browsers, it\u2019s still a fundamentally important principal of web design that we are suggesting and not dictating how something should look. Designs must allow for a little give and take in the text size, and how this affects the design needs to be taken into consideration.\n\nKeep in mind that the same font can display differently in different places and platforms. Something as simple as Times will display wider on a Mac than on Windows. However, the main impact of text resizing is the change in how much vertical space copy takes up. This is particularly important where space is limited by the design (making text bigger causes many more problems than making text smaller). Each element from headings to box-outs to navigation items and buttons needs to be able to expand at least vertically, if not horizontally as well. This may require some thought about how elements on the page may wrap onto new lines, as well as again making sure to provide extended versions of any graphical elements.\n\nSimilarly, it\u2019s rare theses days to know exactly what content you\u2019re working with when a site is designed. Many, if not most sites are designed as a series of templates for some kind of content management system, and so designs cannot be tweaked around any specific item of content. Designs must be able to cope with both much greater and much lesser volumes of content that might be thrown in at the lorem ipsum phase.\n\nParticular things to watch out for are things like headings (how do they wrap onto multiple lines) and any user-generated items like usernames. 
It can be very easy to forget that whilst you might expect something like a username to be 8-12 characters, if the systems powering your site allow for 255 characters they\u2019ll always be someone who\u2019ll go there. Expect them to do so.\n\nAgain, if your site is content managed or not, consider the possibility that the structure might be expanded in the future. Consider how additional items might be added to each level of navigation. Whilst it\u2019s rarely desirable to make significant changes without revisiting the site\u2019s information architecture more thoroughly, it\u2019s an inevitable fact of life that the structure needs a little bit of flexibility to change over time.\n\nInteractions with and without JavaScript\n\nA great number of sites now make good use of JavaScript to streamline the user interface and make everything just that touch more usable. Remember, though, that any developer worth their salt will start by building the interface without JavaScript, get it all working, and then layer that JavaScript on top. This is to allow for users viewing the site without JavaScript available or enabled in their browser.\n\nDesigners need to consider both states of any feature they\u2019re designing \u2013 how it looks and functions with and without JavaScript. If the feature does something fancy with Ajax, consider how the same can be achieved with basic HTML forms, links and intermediary pages. These all need to be designed, because this is how some of your users will interact with the site.\n\nLogged in and logged out states\n\nWhen designing any type of web application or site that has a membership system \u2013 that is to say users can create an account and log into the site \u2013 the design will need to consider how any element is presented in both logged in and logged out states. For some items there\u2019ll be no difference, whereas for others there may be considerable differences.\n\nShould an item be hidden completely not logged out users? Should it look different in some way? Perhaps it should look the same, but prompt the user to log in when they interact with it. If so, what form should that prompt take on and how does the user progress through the authentication process to arrive back at the task they were originally trying to complete?\n\nCouple logged in and logged out states with the possible absence of JavaScript, and every feature needs to be designed in four different states:\n\n\n\tLogged out with JavaScript available\n\tLogged in with JavaScript available\n\tLogged out without JavaScript available\n\tLogged in without JavaScript available\n\n\nFonts\n\nThere are three main causes of war in this world; religions, politics and fonts. I\u2019ve said publicly before that I believe the responsibility for this falls squarely at the feet of Adobe Photoshop. Photoshop, like a mistress at a brothel, parades a vast array of ropey, yet strangely enticing typefaces past the eyes of weak, lily-livered designers, who can\u2019t help but crumble to their curvy charms.\n\nYet, on the web, we have to be a little more restrained in our choice of typefaces. The purest solution is always to make the best use of the available fonts, but this isn\u2019t always the most desirable solution from a design point of view. There are several technical solutions such as techniques that utilise Flash (like sIFR), dynamically generated images and even canvas in newer browsers. 
Discuss the best approach with your developer, as every different technique has different trade-offs, and this may impact the design in other ways.\n\nMessaging\n\nAny site that has interactive elements, from a simple contact form through to fully featured online software application, involves some kind of user messaging. By this I mean the error messages when something goes wrong and the success and thank-you messages when something goes right. These typically appear as the result of an interaction, so are easy to forget and miss off a Photoshop comp.\n\nFor every form, consider what gets displayed to the user if they make a mistake or miss something out, and also what gets displayed back when the interaction is successful. What do they see and where do the go next?\n\nWith Ajax interactions, the user doesn\u2019t get any visual feedback of the site waiting for a response from the server unless you design it that way. Consider using a \u2018waiting\u2019 or \u2018in progress\u2019 spinner to give the user some visual feedback of any background processes. How should these look? How do they animate?\n\nSimilarly, also consider the big error pages like a 404. With luck, these won\u2019t often be seen, but it\u2019s at the point that they are when careful design matters the most.\n\nForm fields\n\nDepending on the visual style of your site, the look of a browser\u2019s default form fields and buttons can sometimes jar. It\u2019s understandable that many a designer wants to change the way they look. Depending on the browser in question, various things can be done to style form fields and their buttons (although it\u2019s not as flexible as most would like).\n\nA common request is to replace the default buttons with a graphical button. This is usually achievable in most cases, although it\u2019s not easy to get a consistent result across all browsers \u2013 particularly when it comes to vertical positioning and the space surrounding the button. If the layout is very precise, or if space is at a premium, it\u2019s always best to try and live with the browser\u2019s default form controls.\n\nWhichever way you go, it\u2019s important to remember that in general, each form field should have a label, and each form should have a submit button. If you find that your form breaks either of those rules, you should double check.\n\nPractical tips for handing files over\n\nThere are a couple of basic steps that a design can carry out to make sure that the developer has the best chance of implementing the design exactly as envisioned.\n\nIf working with Photoshop of Fireworks or similar comping tool, it helps to group and label layers to make it easy for a developer to see which need to be turned on and off to get to isolate parts of the page and different states of the design. Also, if you don\u2019t work in the same office as your developer (and so they can\u2019t quickly check with you), provide a PDF of each page and state so that your developer can see how each page should look aside from any confusion with quick layers are switched on or off. These also act as a handy quick reference that can be used without firing up Photoshop (which can kill both productivity and your machine).\n\nFinally, provide a colour reference showing the RGB values of all the key colours used throughout the design. Without this, the developer will end up colour-picking from the comps, and could potentially end up with different colours to those you intended. 
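It doesn\u2019t need to be anything formal; a plain text file alongside the comps does the job. The names and values below are purely an illustration:\n\nBrand red: #c8102e (RGB 200, 16, 46)\nBody text: #333333 (RGB 51, 51, 51)\nPage background: #f4f1ea (RGB 244, 241, 234)\nLink blue: #1b6ac9 (RGB 27, 106, 201)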
Remember, for a lot of developers, working in a tool like Photoshop is like presenting a designer with an SSH terminal into a web server. It\u2019s unfamiliar ground and easy to get things wrong. Be the expert of your own domain and help your colleagues out when they\u2019re out of their comfort zone. That goes both ways.\n\nIn conclusion\n\nWhen asked the question of how to smooth hand-over between design and development, almost everyone who has experienced this situation could come up with their own list. This one is mine, based on some of the more common experiences we have at edgeofmyseat.com. So in bullet point form, here\u2019s my checklist for handing a design over.\n\n\n\tIs the layout fixed, or fluid?\n\tDoes each element cope with expanding for larger text and more content?\n\tAre all the graphics large enough to cope with an area expanding?\n\tDoes each interactive element have a state for with and without JavaScript?\n\tDoes each element have a state for logged in and logged out users?\n\tHow are any custom fonts being displayed? (and does the developer have the font to use?)\n\tDoes each interactive element have error and success messages designed?\n\tDo all form fields have a label and each form a submit button?\n\tIs your Photoshop comp document well organised?\n\tHave you provided flat PDFs of each state?\n\tHave you provided a colour reference?\n\tAre we having fun yet?", "year": "2008", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2008-12-01T00:00:00+00:00", "url": "https://24ways.org/2008/easing-the-path-from-design-to-development/", "topic": "process"} {"rowid": 166, "title": "Performance On A Shoe String", "contents": "Back in the summer, I happened to notice the official Wimbledon All England Tennis Club site had jumped to the top of Alexa\u2019s Movers & Shakers list \u2014 a list that tracks sites that have had the biggest upturn or downturn in traffic. The lawn tennis championships were underway, and so traffic had leapt from almost nothing to crazy-busy in a no time at all. \n\nMany sites have similar peaks in traffic, especially when they\u2019re based around scheduled events. No one cares about the site for most of the year, and then all of a sudden \u2013 wham! \u2013 things start getting warm in the data centre. Whilst the thought of chestnuts roasting on an open server has a certain appeal, it\u2019s less attractive if you care about your site being available to visitors. Take a look at this Alexa traffic graph showing traffic patterns for superbowl.com at the beginning of each year, and wimbledon.org in the month of July.\n\nTraffic graph from Alexa.com \n\nWhilst not on the same scale or with such dramatic peaks, we have a similar pattern of traffic here at 24ways.org. Over the last three years we\u2019ve seen a dramatic pick up in traffic over the month of December (as would be expected) and then a much lower, although steady load throughout the year. What we do have, however, is the luxury of knowing when the peaks will be. For a normal site, be that a blog, small scale web app, or even a small corporate site, you often just cannot predict when you might get slashdotted, end up on the front page of Digg or linked to from a similarly high-profile site. You just don\u2019t know when the peaks will be.\n\nIf you\u2019re a big commercial enterprise like the Super Bowl, scaling up for that traffic is simply a cost of doing business. 
But for most of us, we can\u2019t afford to have massive capacity sat there unused for 90% of the year. What you have to do instead is work out how to deal with as much traffic as possible with the modest resources you have.\n\nIn this article I\u2019m going to talk about some of the things we\u2019ve learned about keeping 24 ways running throughout December, whilst not spending a fortune on hosting we don\u2019t need for 11 months of each year. We\u2019ve not always got it right, but we\u2019ve learned a lot along the way.\n\nThe Problem\n\nTo know how to deal with high traffic, you need to have a basic idea of what happens when a request comes into a web server. 24 ways is hosted on a single small virtual dedicated server with a great little hosting company in the UK. We run Apache with PHP and MySQL all on that one server. When a request comes in a new Apache process is started to deal with the request (or assigned if there\u2019s one available not doing anything). Each process takes a bunch of memory, so there\u2019s a finite number of processes that you can run, and therefore a finite number of pages you can serve at once before your server runs out of memory.\n\nWith our budget based on whatever is left over after beer, we need to get best performance we can out of the resources available. As the goal is to serve as many pages as quickly as possible, there are several approaches we can take:\n\n\n\tReducing the amount of memory needed by each Apache process\n\tReducing the amount of time each process is needed\n\tReducing the number of requests made to the server\n\n\nYahoo! have published some information on what they call Exceptional Performance, which is well worth reading, and compliments many of my examples here. The Yahoo! guidelines very much look at things from a user perspective, which is always important.\n\nServer tweaking\n\nIf you\u2019re in the position of being able to change your server configuration (our set-up gives us root access to what is effectively a virtual machine) there are some basic steps you can take to maximise the available memory and reduce the memory footprint. Without getting too boring and technical (whole books have been written on this) there are a couple of things to watch out for.\n\nFirstly, check what processes you have running that you might not need. Every megabyte of memory that you free up might equate to several thousand extra requests being served each day, so take a look at top and see what\u2019s using up your resources. Quite often a machine configured as a web server will have some kind of mail server running by default. If your site doesn\u2019t use mail (ours doesn\u2019t) make sure it\u2019s shut down and not using resources.\n\nSecondly, have a look at your Apache configuration and particularly what modules are loaded. The method for doing this varies between versions of Apache, but again, every module loaded increases the amount of memory that each Apache process requires and therefore limits the number of simultaneous requests you can deal with.\n\nThe final thing to check is that Apache isn\u2019t configured to start more servers than you have memory for. This is usually done by setting the MaxClients directive. When that limit is reached, your site is going to stop responding to further requests. 
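To give a feel for where that lives, here\u2019s a sketch of the relevant block for the Apache 2 prefork MPM. The numbers are purely illustrative; work them out from how much memory each of your own Apache processes actually uses and how much RAM is left once MySQL and the operating system have taken their share.\n\n# rough rule of thumb: MaxClients = spare RAM / memory per Apache process\n<IfModule mpm_prefork_module>\n StartServers 2\n MinSpareServers 2\n MaxSpareServers 5\n MaxClients 40\n MaxRequestsPerChild 1000\n</IfModule>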
However, if all else goes well that threshold won\u2019t be reached, and if it does it will at least stop the weight of the traffic taking the entire server down to a point where you can\u2019t even log in to sort it out.\n\nThose are the main tidbits I\u2019ve found useful for this site, although it\u2019s worth repeating that entire books have been written on this subject alone.\n\nCaching\n\nAlthough the site is generated with PHP and MySQL, the majority of pages served don\u2019t come from the database. The process of compiling a page on-the-fly involves quite a few trips to the database for content, templates, configuration settings and so on, and so can be slow and require a lot of CPU. Unless a new article or comment is published, the site doesn\u2019t actually change between requests and so it makes sense to generate each page once, save it to a file and then just serve all following requests from that file.\n\nWe use QuickCache (or rather a plugin based on it) for this. The plugin integrates with our publishing system (Textpattern) to make sure the cache is cleared when something on the site changes. A similar plugin called WP-Cache is available for WordPress, but of course this could be done any number of ways, and with any back-end technology.\n\nThe important principal here is to reduce the time it takes to serve a page by compiling the page once and serving that cached result to subsequent visitors. Keep away from your database if you can.\n\nOutsource your feeds\n\nWe get around 36,000 requests for our feed each day. That really only works out at about 7,000 subscribers averaging five-and-a-bit requests a day, but it\u2019s still 36,000 requests we could easily do without. Each request uses resources and particularly during December, all those requests can add up. \n\nThe simple solution here was to switch our feed over to using FeedBurner. We publish the address of the FeedBurner version of our feed here, so those 36,000 requests a day hit FeedBurner\u2019s servers rather than ours. In addition, we get pretty graphs showing how the subscriber-base is building.\n\n\n\nOff-load big files\n\nLarger files like images or downloads pose a problem not in bandwidth, but in the time it takes them to transfer. A typical page request is very quick, a few seconds at the most, resulting in the connection being freed up promptly. Anything that keeps a connection open for a long time is going to start killing performance very quickly.\n\nThis year, we started serving most of the images for articles from a subdomain \u2013 media.24ways.org. Rather than pointing to our own server, this subdomain points to an Amazon S3 account where the files are held. It\u2019s easy to pigeon-hole S3 as merely an online backup solution, and whilst not a fully fledged CDN, S3 works very nicely for serving larger media files. The roughly 20GB of files served this month have cost around $5 in Amazon S3 charges. That\u2019s so affordable it may not be worth even taking the files back off S3 once December has passed.\n\nI found this article on Scalable Media Hosting with Amazon S3 to be really useful in getting started. I upload the files via a Firefox plugin (mentioned in the article) and then edit the ACL to allow public access to the files. 
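The DNS side of this is just a CNAME record pointing your media hostname at Amazon, with the small catch that the bucket has to be named exactly the same as that hostname. With an example hostname, the zone file entry looks something like:\n\nmedia.example.org. IN CNAME media.example.org.s3.amazonaws.com.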
The way S3 enables you to point DNS directly at it means that you\u2019re not tied to always using the service, and that it can be transparent to your users.\n\nIf your site uses photographs, consider uploading them to a service like Flickr and serving them directly from there. Many photo sharing sites are happy for you to link to images in this way, but do check the acceptable use policies in case you need to provide a credit or link back.\n\nOff-load small files\n\nYou\u2019ll have noticed the pattern by now \u2013 get rid of as much traffic as possible. When an article has a lot of comments and each of those comments has an avatar along with it, a great many requests are needed to fetch each of those images. In 2006 we started using Gravatar for avatars, but their servers were slow and were holding up page loads. To get around this we started caching the images on our server, but along with that came the burden of furnishing all the image requests.\n\nEarlier this year Gravatar changed hands and is now run by the same team behind WordPress.com. Those guys clearly know what they\u2019re doing when it comes to high performance, so this year we went back to serving avatars directly from them.\n\nIf your site uses avatars, it really makes sense to use a service like Gravatar where your users probably already have an account, and where the image requests are going to be dealt with for you. \n\nKnow what you\u2019re paying for\n\nThe server account we use for 24 ways was opened in November 2005. When we first hit the front page of Digg in December of that year, we upgraded the server with a bit more memory, but other than that we were still running on that 2005 spec for two years. Of course, the world of technology has moved on in those years, prices have dropped and specs have improved. For the same amount we were paying for that 2005 spec server, we could have an account with twice the CPU, memory and disk space.\n\nSo in November of this year I took out a new account and transferred the site from the old server to the new. In that single step we were prepared for dealing with twice the amount of traffic, and because of a special offer at the time I didn\u2019t even have to pay the setup cost on the new server. So it really pays to know what you\u2019re paying for and keep an eye out of ways you can make improvements without needing to spend more money.\n\nFurther steps\n\nThere\u2019s nearly always more that can be done. For example, there are some media files (particularly for older articles) that are not on S3. We also serve our CSS directly and it\u2019s not minified or compressed. But by tackling the big problems first we\u2019ve managed to reduce load on the server and at the same time make sure that the load being placed on the server can be dealt with in the most frugal way.\n\nOver the last 24 days we\u2019ve served up articles to more than 350,000 visitors without breaking a sweat. On a busy day, that\u2019s been nearly 20,000 visitors in just 24 hours. While in the grand scheme of things that\u2019s not a huge amount of traffic, it can be a lot if you\u2019re not prepared for it. However, with a little planning for the peaks you can help ensure that when the traffic arrives you\u2019re ready to capitalise on it.\n\nOf course, people only visit 24 ways for the wealth of knowledge and experience that\u2019s tied up in the articles here. 
Therefore I\u2019d like to take the opportunity to thank all our authors this year who have given their time as a gift to the community, and to wish you all a very happy Christmas.", "year": "2007", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2007-12-24T00:00:00+00:00", "url": "https://24ways.org/2007/performance-on-a-shoe-string/", "topic": "ux"} {"rowid": 315, "title": "Edit-in-Place with Ajax", "contents": "Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn\u2019t go that far towards implementing something really practical. We dipped our toes in, but haven\u2019t learned to swim yet.\n\nSo here is swimming lesson number one. Anyone who\u2019s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page.\n\n\n\nPrototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we\u2019ll need here) and a whole bunch of manipulations to the user interface \u2013 all through simple library calls. Here\u2019s what we\u2019re building, so let\u2019s do it.\n\nGetting Started\n\nThere are two major components to this process; the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you\u2019ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here\u2019s what Santa dropped down my chimney: \n\n\n \n \n \n Edit-in-Place with Ajax\n \n \n \n \n\n

<html>\n <head>\n  <title>Edit-in-Place with Ajax</title>\n  <script type=\"text/javascript\" src=\"prototype.js\"></script>\n  <script type=\"text/javascript\" src=\"editinplace.js\"></script>\n </head>\n <body>\n  <h1>Edit-in-place</h1>\n  <p id=\"desc\">Dashing through the snow on a one horse open sleigh.</p>\n </body>\n</html>\n\nSo that\u2019s our page. The editable item is going to be the <p>
called desc. The process goes something like this:\n\n\n\tHighlight the area onMouseOver\n\tClear the highlight onMouseOut\n\tIf the user clicks, hide the area and replace with a ';\n\n var button = ' OR \n ';\n\n new Insertion.After(obj, textarea+button);\n\n Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false);\n Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false);\n\n }\n\nThe first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object.\n\nThe last thing to do before we leave the user to edit is it attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel.\n\nIn the event of a cancel, we can clean up behind ourselves like so:\n\nfunction cleanUp(obj, keepEditable){\n Element.remove(obj.id+'_editor');\n Element.show(obj);\n if (!keepEditable) showAsEditable(obj, true);\n }\n\nSaving the Changes\n\nThis is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response.\n\nfunction saveChanges(obj){\n var new_content = escape($F(obj.id+'_edit'));\n\n obj.innerHTML = \"Saving...\";\n cleanUp(obj, true);\n\n var success = function(t){editComplete(t, obj);}\n var failure = function(t){editFailed(t, obj);}\n\n var url = 'edit.php';\n var pars = 'id=' + obj.id + '&content=' + new_content;\n var myAjax = new Ajax.Request(url, {method:'post',\n postBody:pars, onSuccess:success, onFailure:failure});\n }\n\n function editComplete(t, obj){\n obj.innerHTML = t.responseText;\n showAsEditable(obj, true);\n }\n\n function editFailed(t, obj){\n obj.innerHTML = 'Sorry, the update failed.';\n cleanUp(obj);\n }\n\nAs you can see, we first grab in the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to \u201cSaving\u2026\u201d to show that an update is occurring, and make the Ajax POST.\n\nIf the Ajax fails, editFailed() sets the contents of the object to \u201cSorry, the update failed.\u201d Admittedly, that\u2019s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to \u201cSaving\u2026\u201d. No one likes a failure \u2013 especially a messy one.\n\nIf the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up.\n\nMeanwhile, back at the server\n\nThe missing piece of the puzzle is the server-side script for committing the changes to your database. Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here\u2019s what I have in PHP.\n\n\n\nNot exactly rocket science is it? I\u2019m just catching the content item from the POST and echoing it back. 
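For reference, a minimal sketch of that script, assuming the content field posted by saveChanges() above, is nothing more than:\n\n<?php\n // demo only: take the POSTed content and echo it straight back\n echo $_POST['content'];\n?>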
For your application to be useful, however, you\u2019ll need to know exactly which record you should be updating. I\u2019m passing in the ID of my <p>
, which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update.\n\nYou should also check the user\u2019s credentials to make sure they have permission to edit whatever it is they\u2019re editing. Basically the same rules apply as with any script in your application.\n\nLimitations\n\nThere are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I\u2019ve already mentioned. The second is that from an idealistic standpoint, I\u2019d rather not be using innerHTML. However, the reality is that it\u2019s presently the most efficient way of making large changes to the document. If you\u2019re serving as XML, remember that you\u2019ll need to replace these with proper DOM nodes.\n\nIt\u2019s also important to note that it\u2019s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don\u2019t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all \u2013 it fails silently. It is for this reason that this shouldn\u2019t be used as a complete replacement for a traditional, universally accessible edit form. It\u2019s a great time-saver for those with the ability to use it, but it\u2019s no replacement.\n\nSee it in action\n\nI\u2019ve put together an example page using the inert PHP script above. That is to say, your edits aren\u2019t committed to a database, so the example is reset when the page is reloaded.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-23T00:00:00+00:00", "url": "https://24ways.org/2005/edit-in-place-with-ajax/", "topic": "code"} {"rowid": 16, "title": "URL Rewriting for the Fearful", "contents": "I think it was Marilyn Monroe who said, \u201cIf you can\u2019t handle me at my worst, please just fix these rewrite rules, I\u2019m getting an internal server error.\u201d Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from.\n\nThe majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable \u2014 I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we\u2019re going to go back to basics to try to make the whole rigmarole more understandable.\n\nWhen we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that\u2019s the most common case, that\u2019s what I\u2019ll be sticking to here. If you work with a different server, there\u2019s often documentation specifically for translating from Apache\u2019s mod_rewrite rules. I even found an automatic converter for nginx.\n\nThis isn\u2019t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you\u2019re doing. If you\u2019ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don\u2019t worry. 
As Michael Jackson once insipidly whined, you are not alone.\n\nThe basics\n\nRewrite rules form part of the Apache web server\u2019s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is:\n\nRewriteEngine on\n\nThe general formula for a rewrite rule is:\n\nRewriteRule URL/to/match URL/to/use/if/it/matches [options]\n\nWhen we talk about URL rewriting, we\u2019re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We\u2019ll look at those in turn.\n\nRedirects\n\nRedirects match an incoming URL, and then redirect the user\u2019s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work. \n\nIn 1998, Sir Tim Berners-Lee wrote that Cool URIs don\u2019t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it\u2019s fine to move things around \u2014 especially to correct bad URL design choices of the past \u2014 provided that you can do so while keeping those old URLs working. That\u2019s where redirects can help.\n\nA redirect might look like this\n\nRewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L]\n\nRewriting\n\nBy default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file.\n\nA rewrite rule changes that process by breaking the direct relationship between the URL and the file system. \u201cWhen there\u2019s a request for /about/history.html\u201d a rewrite rule might say, \u201cuse the file /about_section.php instead.\u201d\n\nThis opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page.\n\nRewriteRule ^for/this/url/$ /use/this/file.php [L] \n\nMatching patterns\n\nBy now you\u2019ll have noted the weird ^ and $ characters wrapped around the URL we\u2019re trying to match. That\u2019s because what we\u2019re actually using here is a pattern. Technically, it is what\u2019s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We\u2019ll call it a pattern because we\u2019re not animals.\n\nWhat are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you\u2019d wonder what I wanted your credit card details for, but you\u2019d know that I wanted a two-digit month, a slash, and a two-digit year. 
That\u2019s not a regular expression, but it\u2019s the same idea: using some placeholder characters to define the pattern of the input you\u2019re trying to match.\n\nWe\u2019ve already met two regexp characters.\n\n\n\t^\n\tMatches the beginning of a string\n\t$\n\tMatches the end of a string\n\n\nWhen a pattern starts with ^ and ends with $ it\u2019s to make sure we match the complete URL start to finish, not just part of it. There are lots of other ways to match, too:\n\n\n\t[0-9]\n\tMatches a number, 0\u20139. [2-4] would match numbers 2 to 4 inclusive.\n\t[a-z]\n\tMatches lowercase letters a\u2013z\n\t[A-Z]\n\tMatches uppercase letters A\u2013Z\n\t[a-z0-9]\n\tCombining some of these, this matches letters a\u2013z and numbers 0\u20139\n\n\nThese are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you\u2019re looking for within the brackets, as well as the ranges shown above. \n\nHowever, all these just match one single character. [0-9] would match 8 but not 84 \u2014 to match 84 we\u2019d need to use [0-9] twice.\n\n[0-9][0-9]\n\nSo, if we wanted to match 1984 we could to do this:\n\n[0-9][0-9][0-9][0-9] \n\n\u2026but that\u2019s getting silly. Instead, we can do this:\n\n[0-9]{4}\n\nThat means any character between 0 and 9, four times. If we wanted to match a number, but didn\u2019t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more.\n\n[0-9]+\n\nThis now matches 1, 123 and 1234567.\n\nPutting it into practice\n\nLet\u2019s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. The articles all have URLs like this:\n\n2013/article-title/\n\nThey start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We\u2019d match it like this:\n\n^[0-9]{4}/[a-z0-9-]+/$\n\nIf that looks frightening, don\u2019t worry. Breaking it down, from the start of the URL (^) we\u2019re looking for four numbers ([0-9]{4}). Then a slash \u2014 that\u2019s just literal \u2014 and then anything lowercase a\u2013z or 0\u20139 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$).\n\nPutting that into a rewrite rule, we end up with this:\n\nRewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php\n\nWe\u2019re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it\u2019s supposed to display.\n\nCapturing groups, and replacements\n\nWhen rewriting URLs you\u2019ll often want to take important parts of the URL you\u2019re matching and pass them along to the script that handles the request. That\u2019s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we\u2019re looking for. That means we need to call it like this:\n\n/article.php?year=2013&slug=article-title\n\nTo do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what\u2019s called a capturing group. 
To capture an important part of the source URL to use in the destination, surround it in parentheses.\n\nOur pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes:\n\n^([0-9]{4})/([a-z0-9-]+)/$ \n\nTo use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.)\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 \n\nThe value of the year capturing group gets used as $1 and the article title slug is $2. Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern.\n\nOptions\n\nSeveral brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are:\n\n\n\tR=301\n\tPerform an HTTP 301 redirect to send the user\u2019s browser to the new URL. A status of 301 means a resource has moved permanently and so it\u2019s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes.\n\tL\n\tLast. If this rule matches, don\u2019t bother processing the following rules.\n\n\nOptions are set in square brackets at the end of the rule. You can set multiple options by separating them with commas:\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L]\n\nor\n\nRewriteRule ^about/([a-z0-9-]+).jsp/$ /about/$1/ [R=301,L] \n\nCommon pitfalls\n\nOnce you\u2019ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. One common reason for this is hidden behind that [L] flag. \n\nL for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does \u2014 the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes. \n\nOne way to avoid this problem is to keep your \u2018real\u2019 pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules.\n\nUseful snippets\n\nI find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to.\n\nExcluding a directory\n\nAs mentioned above, if you\u2019re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won\u2019t be applied.\n\nRewriteRule ^_source - [L]\n\nThis is also useful for excluding things like CMS folders from your website\u2019s rewrite rules\n\nRewriteRule ^perch - [L] \n\nAdding or removing www from the domain\n\nSome folk like to use a www and others don\u2019t. Usually, it\u2019s best to pick one and go with it, and redirect the one you don\u2019t want. 
On this site, we don\u2019t use www.24ways.org so we redirect those requests to 24ways.org.\n\nThis uses a RewriteCond which is like an if for a rewrite rule: \u201cIf this condition matches, then apply the following rule.\u201d In this case, it\u2019s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything:\n\nRewriteCond %{HTTP_HOST} ^www.24ways.org$ [NC]\nRewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L]\n\nThe [NC] flag means \u2018no case\u2019 \u2014 the match is case-insensitive. The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance.\n\nRemoving file extensions\n\nSometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.\n\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteCond %{REQUEST_FILENAME} !-d\nRewriteCond %{REQUEST_FILENAME}.php -f\nRewriteRule ^(.+)$ $1.php [L,QSA]\n\nThis says if the file being asked for isn\u2019t a file (!-f) and if it isn\u2019t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means \u2018query string append\u2019: append the existing query string onto the rewritten URL.\n\nIt\u2019s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL.\n\nLogging for when it all goes wrong\n\nAlthough not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. This can be useful to track down where a rule is going wrong, if it\u2019s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance.\n\nRewriteEngine On\nRewriteLog \"/full/system/path/to/rewrite.log\"\nRewriteLogLevel 5\n\nTo be doubly clear: this will not work from an .htaccess file \u2014 it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.)\n\nThe white screen of death\n\nOne of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn\u2019t an error message, of course. It\u2019s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log.\n\nIf you have access to your server logs, check the Apache error log and you\u2019ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.)\n\nIn conclusion\n\nRewriting URLs can be a bear, but the advantages are clear. 
Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future.\n\nIf you\u2019re redesigning a site, remember that cool URIs don\u2019t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working.\n\nFurther reading\n\nTo find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources.\n\n\n\tFrom the horse\u2019s mouth, the Apache mod_rewrite documentation\n\tParticularly useful with that documentation is the RewriteRule Flags listing\n\tYou may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial\n\tFriend of 24 ways, Neil Crosby has a mod_rewrite Beginner\u2019s Guide which I\u2019ve found handy over the years.\n\n\nAs noted at the start, this isn\u2019t a fully comprehensive guide, but I hope it\u2019s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects? Feel free to share them in the comments.", "year": "2013", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2013-12-01T00:00:00+00:00", "url": "https://24ways.org/2013/url-rewriting-for-the-fearful/", "topic": "code"} {"rowid": 29, "title": "What It Takes to Build a Website", "contents": "In 1994 we lost Kurt Cobain and got the world wide web as a weird consolation prize. In the years that followed, if you\u2019d asked me if I knew how to build a website I\u2019d have said yes, I know HTML, so I know how to build a website. If you\u2019d then asked me what it takes to build a website, I\u2019d have had to admit that HTML would hardly feature.\n\nAmong the design nerdery and dev geekery it\u2019s easy to think that the nuts and bolts of building a page just need to be multiplied up and Ta-da! There\u2019s your website. That can certainly be true with weekend projects and hackery for fun. It works for throwing something together on GitHub or experimenting with ideas on your personal site. But what about working professionally on client projects?\n\nThe web is important, so we need to build it right.\n\nIt\u2019s 2015 \u2013 your job involves people paying you money for building websites. What does it take to build a website and to do it right? What practices should we adopt to make really great, successful and professional web projects in 2015? I put that question to some friends and 24 ways authors to see what they thought.\n\nGetting the tech right\n\nInevitably, it all starts with the technology. We work in a technical medium, after all. From Notepad and WinFTP through to continuous integration and deployment \u2013 how do you build sites?\n\nCreate a stable development environment\n\nThere\u2019s little more likely to send a web developer into a wild panic and a client into a wild rage than making a new site live and things just not working. That\u2019s why it\u2019s important to have realistic development and staging environments that mimic the live server as closely as possible.\n\nAre you in the habit of developing new sites right on the client\u2019s server? Or maybe in a subfolder on your local machine? It\u2019s time to reconsider.\n\nCharlie Perrins writes:\n\n\n\tDon\u2019t work on a live server \u2013 this feels like one of those gear-changing moments for a developer\u2019s growth. 
Build something that works just as well locally on your own machine as it does on a live server, and capture the differences in the code between the local and live version in a single config file. Ultimately, if you can get all the differences between environments down to a config level then you\u2019ll be in a really good position to automate the deployment process at some point in the future.\n\n\nAnything that creates a significant difference between the development and the live environments has the potential to cause problems you won\u2019t know about until the site goes live \u2013 and at that point the problems are very public and very embarrassing, not to mention unprofessional.\n\nA reasonable solution is to use a tool like MAMP PRO which enables you to set up an individual local website for each project you work on. Importantly, individual sites give you both consistency of paths between development and live, but also the ability to configure server options (like PHP versions and configuration, for example) to match the live site.\n\nBetter yet is to use a virtual machine, managed with a tool such as Vagrant. If you\u2019re interested in learning more about that, we have an article on that subject later in the series.\n\nUse source control\n\nTrent Walton writes:\n\n\n\tWe use source control, and it\u2019s become the centerpiece for how we handle collaboration, enhancements, and issues. It drives our process.\n\n\nI\u2019m hoping by now that you\u2019re either using source control for all your work, or feeling a nagging guilt that you should be. Be it Git, Mercurial, Subversion (name your poison), a revision control system enables you to keep track of changes, revert anything that breaks, and keep rolling backups of your project.\n\nThe benefits only start there, and Charlie Perrins recommends using source control \u201cnot just as a personal backup of your code, but as a way to play nicely with other developers.\u201c\n\nNoting the benefits when collaborating with other developers, he adds:\n\n\n\tGraduating from being the sole architect of your codebase to contributing to a shared codebase is a huge leap for a developer. Perhaps a practical way for people who tend to work on their own to do that would be to submit a pull request or a patch to an open source project or plugin.\u201d\n\n\nRichard Rutter of Clearleft sees clear advantages for the client, too. He recommends using source control \u201cpreferably in some sort of collaborative environment that you can open up or hand over to the client\u201d \u2013 a feature found with hosted services such as GitHub.\n\nIf you\u2019d like to hone your Git skills, Emma Jane Westby wrote Git for Grown-ups in last year\u2019s 24 ways.\n\nDon\u2019t repeat, automate!\n\nTim Kadlec is a big proponent of automating your build process:\n\n\n\tI\u2019ve been hammering that home to every client I\u2019ve had this year. It\u2019s amazing how many companies don\u2019t really have a formal build/deployment process in place. So many issues on the web (performance, accessibility, etc.) can be greatly improved just by having a layer of automation involved.\n\n\tFor example, graphic editing software spits out ridiculously bloated images. Very frequently, that\u2019s what ends up getting put on a site. If you have a build process, you can have the compression automated and start seeing immediate gains for no effort. 
On a recent project, they were able to shave around 1.5MB from their site weight simply by automating compression.\n\n\nOnce you have your code in source control, some of that automation can be made easier. Brian Suda writes:\n\n\n\tWe have a few bash scripts that run on git commit: they compile the less, jslint and remove white-space, basically the 3 Cs, Compress, Concatenate, Combine. This is now part of our workflow without even realising it.\n\n\nOne great way to get started with a build process is to use a tool like Grunt, and a great way to get started with Grunt is to read Chris Coyier\u2019s Grunt for People Who Think Things Like Grunt are Weird and Hard.\n\nTim reinforces:\n\n\n\tIssues like [image compression] \u2014 or simple accessibility issues like alt tags on images \u2014 should never be able to hit a live server. If you can detect it, you can automate it. And if you can automate it, you can free up time for designers and developers to focus on more challenging \u2014 and interesting \u2014 problems.\n\n\nA clear call to arms to tighten up and formalise development and deployment practices. The less that has to be done manually or is susceptible to change, the less that can go wrong when a site is built and deployed. Any procedures that are automated are no longer dependant on a single person\u2019s knowledge, making it easier to build your team or just cope when someone important is out of the office or leaves.\n\nIf you\u2019re interested in kicking the FTP habit and automating your site deployments, we have an article later in the series just for you.\n\nBuild systems, not sites\n\nOne big theme arising this year was that of building websites as systems, not as individual pages.\n\nBrad Frost:\n\n\n\tFor me, teams making websites in 2015 shouldn\u2019t be working on just-another-redesign redesign. People are realizing that in order to make stable, future-friendly, scalable, extensible web experiences they\u2019re going to need to think more systematically. That means crafting deliberate and thoughtful design systems. That means establishing front-end style guides. That means killing the out-dated, siloed, assembly-line waterfall process and getting cross-disciplinary teams working together in meaningful ways. That means treating development as design. That means treating performance as design. That means taking the time out of the day to establish the big picture, rather than aimlessly crawling along quarter by quarter.\n\n\nDesigner and developer Jina Bolton also advocates the use of style guides, and recommends making the guide a project deliverable:\n\n\n\tConsider adding on a style guide/UI library to your project as a deliverable for maintainability and thinking through all UI elements and components.\n\n\nVal Head agrees: \u201cbuild and maintain a style guide for each project\u201d she wrote. On the subject of approaching a redesign, she added:\n\n\n\tA UI inventory goes a long way to helping get your head around what a design system needs in the early stages of a redesign project.\n\n\nSo what about that old chestnut, responsive web design? Should we be making sites responsive by default? How about mobile first?\n\nRichard Rutter:\n\n\n\tThink mobile first unless you have a very good reason not to. Remember to take the client with you on this principle, otherwise it won\u2019t work as a convincing piece of design.\n\n\nTrent Walton adds:\n\n\n\tThe more you can test and sort of skew your perception for what is typical on the web, the better. 
4k displays hooked up to 100Mbps connections can make one extremely unsympathetic.\n\n\nThe value of testing with real devices is something Ruth John appreciates. She wrote:\n\n\n\tI still have my own small device lab at home, even though I work permanently for a well-established company (which has a LOT of devices at its disposal) \u2013 it just means I can get a good overview of how things are looking during development.\n\n\nAnd speaking of systems, Mark Norman Francis recommends the use of measuring tools to aid the design process; \u201c[U]se analytics and make decisions from actual data\u201d he suggests, rather than relying totally on intuition.\n\nTim Kadlec adds a word on performance planning:\n\n\n\tI think having a performance budget in place should now be a given on any project. We\u2019ve proven pretty conclusively through a hundred and one case studies that performance matters. And over the last year or so, we\u2019ve really seen a lot of great tools emerge to help track and enforce performance budgets. There\u2019s not really a good excuse for not using one any more.\n\n\nIt\u2019s clear that in the four years since Ethan Marcotte\u2019s Responsive Web Design article the diversity of screen sizes, network connection speeds and input methods has only increased. New web projects should presume visitors will be using anything from a watch up to a big screen desktop display, and from being offline, through to GPRS, 3G and fast broadband.\n\nWill it take more time to design and build for those constraints? Yes, it most likely will. If Internet Explorer is brave enough to ask to be your default browser, you can be brave enough to tell your client they need to build responsively.\n\nWorking collaboratively\n\nA big part of delivering a successful website project is how we work together, both as a design team and a wider project team with the client.\n\nVal Head recommends an open line of communication:\n\n\n\tKeep conversations going. With clients, with teammates. Talking is so important with the way we work now. A good team conversation place, like Slack, is slowly becoming invaluable for me too.\n\n\nRuth John agrees:\n\n\n\tWe\u2019ve recently opened up our lines of communication by using Slack. It has transformed the way we work. We\u2019re easily more productive and collaborative on projects, as well as making it a lot easier for us all to work remotely (including freelancers).\n\n\nShe goes on to point out how tools can be combined to ease team communication without adding further complications:\n\n\n\tWe have a private GitHub organisation (which everyone who works with us is granted access to), which not only holds all our project code but also a team wiki. This has lots of information to get you set up within the team, as well as coding guidelines and best practices and other admin info, like contact numbers/emails for the team.\n\n\nSmall-A agile is also the theme of the day, with Mark Norman Francis suggesting an approach of \u201csmall iterations with constant feedback around individual features, not spec-it-all-first\u201d. He also encourages you to review as you go, at each stage of the project:\n\n\n\tAlways reflect on what went well and what went badly, and how you can learn from that, even if not Doing Agile\u2122. Ultimately \u201cbest practices\u201d should come from learning lessons (both good and bad).\n\n\nRichard Rutter echoes this, warning against working in isolation from the client for too long:\n\n\n\tAvoid big reveals. 
Your engagement with the client should be participatory. In business no one likes surprises.\n\n\nThis experience rings true for Ruth John who recommends involving real users in the feedback loop, not just the client:\n\n\n\tWe also try and get feedback on what we\u2019re building as soon and as often as we can with our stakeholders/clients and real users.\n\n\nWe should also remember that our role is to serve the client\u2019s needs, not just bill them for whatever we can. Brian Suda adds:\n\n\n\tDon\u2019t sell clients on things they don\u2019t need. We can spout a lot of jargon and scare clients into thinking you are a god. We can do things few can now, but you can\u2019t rip people off because they are unknowledgeable.\n\n\nBut do clients know what they\u2019re getting, even when they see it? Trent Walton has an interesting take:\n\n\n\tWe focus on prototypes over image-based comps at all costs, especially when meetings are involved. It\u2019s much easier to assess a prototype, and too often with image-based comps, discussions devolve into how something might feel when actually live, or how a layout could change to fit a given viewport.\n\n\nVal Head also likes to get work into the browser:\n\n\n\tSketch design ideas with any software you like, but get to the browser as soon as possible.\n\n\nBeyond your immediate team, Emma Jane Westby has advice for looking further afield:\n\n\n\tInvest time into building relationships within your (technical) community. You never know when you might be able to lend a hand; or benefit from someone who\u2019s able to lend theirs.\n\n\nAnd when things don\u2019t go according to plan, Brian Suda has the following advice:\n\n\n\tIf something doesn\u2019t work out, be professional and don\u2019t burn bridges. It will always come back to you.\n\n\nThe best work comes from working collaboratively, not just as a team within an agency or department, but with the client and stakeholders too. If doing your job and chucking it over the fence ever worked, it certainly doesn\u2019t fly any more. You can work in isolation, but doing really great work requires collaboration.\n\nThe business end\n\nWhen you\u2019re building sites professionally, every team member has to think about the business aspects. Estimating time, setting billing rates, and establishing deliverables are all part of the job.\n\nIn 2008, Andrew Clarke gave us the Contract Killer sample contract we could use to establish a working agreement for a web design project. Richard Rutter agrees that contracts are still an essential part of business:\n\n\n\tThey are there for both parties\u2019 protection. Make sure you know what will happen if you decide you don\u2019t want to work with the client any more (it happens) and, of course, what circumstances mean they can stop taking your services.\n\n\nHaving a contract is one thing, but does it adequately protect both you and the client? Emma Jane Westby adds:\n\n\n\tFind a good IP lawyer/legal counsel. I routinely had an IP lawyer read all of my contracts to find loopholes I wouldn\u2019t have noticed. I didn\u2019t always change the contract, but at least I knew what might come back to bite me.\n\n\nSo, you have a contract in place, and know what the project is. Brian Suda recommends keeping track of time and making sure you bill fairly for the hours the project costs you:\n\n\n\tIf I go to a meeting and they are 15 minutes late, the billing clock has already started. 
They can\u2019t expect me to be in the 1h meeting and not bill for the extra 15\u201330 minutes they wasted. It goes both ways too. You need to do your best to respect their deadlines and time frame \u2013 this is always hard to get right.\n\n\nAs ever, it\u2019s good business to do good business. Perhaps we can at last shed the old image of web designers being snowboarding layabouts and demonstrate to clients that we care as much about conducting professional business as they do.\n\nTime to review\n\nIt\u2019s a lot to take in. Some of these ideas and practices will be familiar, others new and yet to be evaluated. The web moves at a fast pace, and we need to be constantly reexamining our tools, techniques and working practices. The most important thing is not to blindly adopt any and all suggestions, but to carefully look at what the benefits might be and decide how they apply to your work.\n\nCould you benefit from more formalised development and deployment procedures? Would your design projects run more smoothly and have a longer maintainable life if you approached the solution as a componentised system rather than a series of pages? Are your teams structured in a way that enables the most fluid communication, or are there changes you could make? Are your billing procedures and business agreements serving you and your clients in the best way possible?\n\nThe new year is a good time to look at your working practices and see what can be improved, and maybe this time next year you\u2019ll look back and think \u201cthank goodness we don\u2019t work like that any more\u201d.", "year": "2014", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2014-12-01T00:00:00+00:00", "url": "https://24ways.org/2014/what-it-takes-to-build-a-website/", "topic": "business"} {"rowid": 132, "title": "Tasty Text Trimmer", "contents": "In most cases, when designing a user interface it\u2019s best to make a decision about how data is best displayed and stick with it. Failing to make a decision ultimately leads to too many user options, which in turn can be taxing on the poor old user. \n\nUnder some circumstances, however, it\u2019s good to give the user freedom in customising their workspace. One good example of this is the \u2018Article Length\u2019 tool in Apple\u2019s Safari RSS reader. Sliding a slider left of right dynamically changes the length of each article shown. It\u2019s that kind of awesomey magic stuff that\u2019s enough to keep you from sleeping. Let\u2019s build one.\n\nThe Setup\n\nLet\u2019s take a page that has lots of long text items, a bit like a news page or like Safari\u2019s RSS items view. If we were to attach a class name to each element we wanted to resize, that would give us something to hook onto from the JavaScript. \n\nExample 1: The basic page.\n\nAs you can see, I\u2019ve wrapped my items in a DIV and added a class name of chunk to them. It\u2019s these chunks that we\u2019ll be finding with the JavaScript. Speaking of which \u2026\n\nOur Core Functions\n\nThere are two main tasks that need performing in our script. The first is to find the chunks we\u2019re going to be resizing and store their original contents away somewhere safe. We\u2019ll need this so that if we trim the text down we\u2019ll know what it was if the user decides they want it back again. We\u2019ll call this loadChunks. 
\n\nvar loadChunks = function(){\n\tvar everything = document.getElementsByTagName('*');\n\tvar i, l;\n\tchunks\t= [];\n\tfor (i=0, l=everything.length; i<l; i++){\n\t\tif (everything[i].className.indexOf('chunk') > -1){\n\t\t\tchunks.push({\n\t\t\t\tref: everything[i],\n\t\t\t\toriginal: everything[i].innerHTML\n\t\t\t});\n\t\t}\n\t}\n};\n\nThe variable chunks is stored outside of this function so that we can access it from our next core function, which is doTrim.\n\nvar doTrim = function(interval) {\n\tif (!chunks) loadChunks();\n\tvar i, l;\n\tfor (i=0, l=chunks.length; i\n \n
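\n\nIn outline, doTrim walks the stored chunks and writes a shortened copy of each original back into the page. A minimal sketch (assuming a simple word-count trim, where interval is the number of words to keep; the finished script may well do something more refined) looks like this:\n\nvar doTrim = function(interval) {\n\tif (!chunks) loadChunks();\n\tvar i, l;\n\tfor (i=0, l=chunks.length; i<l; i++){\n\t\t// Split the stored original into words, keep the first\n\t\t// `interval` of them, and write the shortened copy back.\n\t\tvar words = chunks[i].original.split(' ');\n\t\tchunks[i].ref.innerHTML = words.slice(0, interval).join(' ');\n\t}\n};\n\nSplitting on spaces like this is deliberately naive (it takes no care over any markup inside a chunk), but because the untouched originals are kept in chunks, the full text can always be restored by writing chunks[i].original back in.\n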
\n\nAs you can see, we have a standard unordered list which is then styled with CSS to look like tabs. If we give each tab a class describing its logical section of the site, and then apply a matching class to the body tag of each page, we can write a clever CSS selector to highlight the correct tab on any given page. \n\nSound complicated? Well, it\u2019s not a trivial concept, but actually applying it is dead simple.\n\nModifying the markup\n\nThe first thing is to place a class name on each li in the list:\n\n
<ul id=\"navigation\">\n\t<li class=\"home\"><a href=\"/\">Home</a></li>\n\t<li class=\"about\"><a href=\"/about/\">About</a></li>\n\t<li class=\"work\"><a href=\"/work/\">Work</a></li>\n\t<li class=\"products\"><a href=\"/products/\">Products</a></li>\n\t<li class=\"contact\"><a href=\"/contact/\">Contact</a></li>\n</ul>
\n\nThen, on each page of your site, apply a matching class to the body tag to indicate which section of the site that page is in. For example, on your About page:\n\n<body class=\"about\">\n...\n\nWriting the CSS selector\n\nYou can now write a single CSS rule to match the selected tab on any given page. The logic is that you want to match the \u2018about\u2019 tab on the \u2018about\u2019 page and the \u2018products\u2019 tab on the \u2018products\u2019 page, so the selector looks like this:\n\nbody.home #navigation li.home,\n body.about #navigation li.about,\n body.work #navigation li.work,\n body.products #navigation li.products,\n body.contact #navigation li.contact{\n ... whatever styles you need to show the tab selected ...\n } \n\nSo all you need to do when you create a new page in your site is to apply a class to the body tag to say which section it\u2019s in. The CSS will do the rest for you \u2013 without any server-side help.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-10T00:00:00+00:00", "url": "https://24ways.org/2005/auto-selecting-navigation/", "topic": "code"} {"rowid": 165, "title": "Transparent PNGs in Internet Explorer 6", "contents": "Newer breeds of browser such as Firefox and Safari have offered support for PNG images with full alpha channel transparency for a few years. With the use of hacks, support has been available in Internet Explorer 5.5 and 6, but the hacks are non-ideal and have been tricky to use. With IE7 winning masses of users from earlier versions over the last year, full PNG alpha-channel transparency is becoming more of a reality for day-to-day use.\n\nHowever, there are still numbers of IE6 users out there who we can\u2019t leave out in the cold this Christmas, so in this article I\u2019m going to look at what we can do to support IE6 users whilst taking full advantage of transparency for the majority of a site\u2019s visitors.\n\nSo what\u2019s alpha channel transparency?\n\nCast your minds back to the Ghost of Christmas Past, the humble GIF. Images in GIF format offer transparency, but that transparency is either on or off for any given pixel. Each pixel\u2019s either fully transparent, or a solid colour. In GIF, transparency is effectively just a special colour you can choose for a pixel.\n\nThe PNG format tackles the problem rather differently. As well as having any colour you choose, each pixel also carries a separate channel of information detailing how transparent it is. This alpha channel enables a pixel to be fully transparent, fully opaque, or, critically, any step in between.\n\nThis enables designers to produce images that can have, for example, soft edges without any of the \u2018halo effect\u2019 traditionally associated with GIF transparency. If you\u2019ve ever worked on a site that has different colour schemes and therefore requires multiple versions of each graphic against a different colour, you\u2019ll immediately see the benefit. \n\nWhat\u2019s perhaps more interesting than that, however, is the extra creative freedom this gives designers in creating beautiful sites that can remain web-like in their ability to adjust, scale and reflow.\n\nThe Internet Explorer problem\n\nUp until IE7, there has been no fully native support for PNG alpha channel transparency in Internet Explorer. However, since IE5.5 there has been some support in the form of a proprietary filter called the AlphaImageLoader. 
Internet Explorer filters can be applied directly in your CSS (for both inline and background images), or by setting the same CSS property with JavaScript. \n\nCSS:\n\nimg {\n\tfilter: progid:DXImageTransform.Microsoft.AlphaImageLoader(...);\n}\n\nJavaScript:\n\nimg.style.filter = \"progid:DXImageTransform.Microsoft.AlphaImageLoader(...)\";\n\nThat may sound like a problem solved, but all is not as it may appear. Firstly, as you may realise, there\u2019s no CSS property called filter in the W3C CSS spec. It\u2019s a proprietary extension added by Microsoft that could potentially cause other browsers to reject your entire CSS rule. \n\nSecondly, AlphaImageLoader does not magically add full PNG transparency support so that a PNG in the page will just start working. Instead, when applied to an element in the page, it draws a new rendering surface in the same space that element occupies and loads a PNG into it. If that sounds weird, it\u2019s because that\u2019s precisely what it is. However, by and large the result is that PNGs with an alpha channel can be accommodated.\n\nThe pitfalls\n\nSo, whilst support for PNG transparency in IE5.5 and 6 is possible, it\u2019s not without its problems.\n\nBackground images cannot be positioned or repeated\n\nThe AlphaImageLoader does work for background images, but only for the simplest of cases. If your design requires the image to be tiled (background-repeat) or positioned (background-position) you\u2019re out of luck. The AlphaImageLoader allows you to set a sizingMethod to either crop the image (if necessary) or to scale it to fit. Not massively useful, but something at least.\n\nDelayed loading and resource use\n\nThe AlphaImageLoader can be quite slow to load, and appears to consume more resources than a standard image when applied. Typically, you\u2019d need to add thousands of GIFs or JPEGs to a page before you saw any noticeable impact on the browser, but with the AlphaImageLoader filter applied Internet Explorer can become sluggish after just a handful of alpha channel PNGs.\n\nThe other noticeable effect is that as more instances of the AlphaImageLoader are applied, the longer it takes to render the PNGs with their transparency. The user sees the PNG load in its original non-supported state (with black or grey areas where transparency should be) before one by one the filter kicks in and makes them properly transparent.\n\nBoth the issue of sluggish behaviour and delayed load only really manifest themselves with volume and size of image. Use just a couple of instances and it\u2019s fine, but be careful adding more than five or six. As ever, test, test, test.\n\nLinks become unclickable, forms unfocusable \n\nThis is a big one. There\u2019s a bug/weirdness with AlphaImageLoader that sometimes prevents interaction with links and forms when a PNG background image is used. This is sometimes reported as a z-index issue, but I don\u2019t believe it is. Rather, it\u2019s an artefact of that weird way the filter gets applied to the document almost outside of the normal render process. \n\nOften this can be solved by giving the links or form elements hasLayout using position: relative; where possible. However, this doesn\u2019t always work and the non-interaction problem cannot always be solved. 
You may find yourself having to go back to the drawing board.\n\nSidestepping the danger zones\n\nFrankly, it\u2019s pretty bad news if you design a site, have that design signed off by your client, build it and then find out only at the end (because you don\u2019t know what might trigger a problem) that your search field can\u2019t be focused in IE6. That\u2019s an absolute nightmare, and whilst it\u2019s not likely to happen, it\u2019s possible that it might. It\u2019s happened to me. So what can you do?\n\nThe best approach I\u2019ve found to this scenario is:\n\n\n\tIsolate the PNG or PNGs that are causing the problem. Step through the PNGs in your page, commenting them out one by one and retesting. Typically it\u2019ll be the nearest PNG to the problem, so try there first. Keep going until you can click your links or focus your form fields.\n\tThis is where you really need luck on your side, because you\u2019re going to have to fake it. This will depend on the design of the site, but some way or other create a replacement GIF or JPEG image that will give you an acceptable result. Then use conditional comments to serve that image to only users of IE older than version 7.\n\n\nA hack, you say? Well, you started it, chum.\n\nApplying AlphaImageLoader\n\nBecause the filter property is invalid CSS, the safest pragmatic approach is to apply it selectively with JavaScript for only Internet Explorer versions 5.5 and 6. This helps ensure that by default you\u2019re serving standard CSS to browsers that support both the CSS and PNG standards correctly, and then selectively patching up only the browsers that need it. \n\nSeveral years ago, Aaron Boodman wrote and released a script called sleight for doing just that. However, sleight dealt only with images in the page, and not background images applied with CSS. Building on top of Aaron\u2019s work, I hacked sleight and came up with bgsleight for applying the filter to background images instead. That was in 2003, and over the years I\u2019ve made a couple of improvements here and there to keep it ticking over and to resolve conflicts between sleight and bgsleight when used together. However, with alpha channel PNGs becoming much more widespread, it\u2019s time for a new version.\n\nIntroducing SuperSleight\n\nSuperSleight adds a number of new and useful features that have come from the day-to-day needs of working with PNGs.\n\n\n\tWorks with both inline and background images, replacing the need for both sleight and bgsleight\n\tWill automatically apply position: relative to links and form fields if they don\u2019t already have position set. (Can be disabled.)\n\tCan be run on the entire document, or just a selected part where you know the PNGs are. This is better for performance.\n\tDetects background images set to no-repeat and sets the sizingMethod to crop rather than scale.\n\tCan be re-applied by any other JavaScript in the page \u2013 useful if new content has been loaded by an Ajax request.\n\n\n Download SuperSleight \n\nImplementation\n\nGetting SuperSleight running on a page is quite straightforward: you just need to link the supplied JavaScript file (or the minified version if you prefer) into your document inside conditional comments so that it is delivered to only Internet Explorer 6 or older.\n\n\n\nSupplied with the JavaScript is a simple transparent GIF file. The script replaces the existing PNG with this before re-layering the PNG over the top using AlphaImageLoader. 
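\n\nFor reference, the linking pattern is just a standard conditional comment block. Something like the following would do it, where the src value is purely illustrative and should point at wherever you have put your copy of the script:\n\n<!--[if lte IE 6]>\n<script type=\"text/javascript\" src=\"path/to/supersleight-min.js\"></script>\n<![endif]-->\n\nBrowsers that do not understand conditional comments see nothing but an ordinary HTML comment, and IE7 and above skip it because of the lte IE 6 condition, so the script is only ever requested by IE6 and older.\n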
You can change the name or path of the image in the top of the JavaScript file, where you\u2019ll also find the option to turn off the adding of position: relative to links and fields if you don\u2019t want that.\n\nThe script is kicked off with a call to supersleight.init() at the bottom. The scope of the script can be limited to just one part of the page by passing an ID of an element to supersleight.limitTo(). And that\u2019s all there is to it.\n\nUpdate March 2008: a version of this script as a jQuery plugin is also now available.", "year": "2007", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2007-12-01T00:00:00+00:00", "url": "https://24ways.org/2007/supersleight-transparent-png-in-ie6/", "topic": "code"} {"rowid": 220, "title": "Finding Your Way with Static Maps", "contents": "Since the introduction of the Google Maps service in 2005, online maps have taken off in a way not really possible before the invention of slippy map interaction. Although quickly followed by a plethora of similar services from both commercial and non-commercial parties, Google\u2019s first-mover advantage, and easy-to-use developer API saw Google Maps become pretty much the de facto mapping service.\n\n\n\nIt\u2019s now so easy to add a map to a web page, there\u2019s no reason not to. Dropping an iframe map into your page is as simple as embedding a YouTube video.\n\nBut there\u2019s one crucial drawback to both the solution Google provides for you to drop into your page and the code developers typically implement themselves \u2013 they don\u2019t work without JavaScript.\n\nA bit about JavaScript\n\nBack in October of this year, The Yahoo! Developer Network blog ran some tests to measure how many visitors to the Yahoo! home page didn\u2019t have JavaScript available or enabled in their browser. It\u2019s an interesting test when you consider that the audience for the Yahoo! home page (one of the most visited pages on the web) represents about as mainstream a sample as you\u2019ll find. If there\u2019s any such thing as an \u2018average Web user\u2019 then this is them.\n\nThe results surprised me. It varied from region to region, but at most just two per cent of visitors didn\u2019t have JavaScript running. To be honest, I was expecting it to be higher, but this quote from the article caught my attention:\n\n\n\tWhile the percentage of visitors with JavaScript disabled seems like a low number, keep in mind that small percentages of big numbers are also big numbers.\n\n\nThat\u2019s right, of course, and it got me thinking about what that two per cent means. For many sites, two per cent is the number of visitors using the Opera web browser, using IE6, or using Mobile Safari. \n\nSo, although a small percentage of the total, users without JavaScript can\u2019t just be forgotten about, and catering for them is at the very heart of how the web is supposed to work.\n\nStarting with content in HTML, we layer on presentation with CSS and then enhance interactivity with JavaScript. If anything fails along the way or the network craps out, or a browser just doesn\u2019t support one of the technologies, the user still gets something they can work with. \n\nIt\u2019s progressive enhancement \u2013 also known as doing our jobs properly.\n\nSorry, wasn\u2019t this about maps?\n\nAs I was saying, the default code Google provides, and the example code it gives to developers (which typically just gets followed \u2018as is\u2019) doesn\u2019t account for users without JavaScript. 
No JavaScript, no content.\n\nWhen adding the ability to publish maps to our small content management system Perch, I didn\u2019t want to provide a solution that only worked with JavaScript. I had to go looking for a way to provide maps without JavaScript, too.\n\nThere\u2019s a simple solution, fortunately, in the form of static map tiles. All the various slippy map services use a JavaScript interface on top of what are basically rendered map image tiles. Dragging the map loads in more image tiles in the direction you want to view. If you\u2019ve used a slippy map on a slow connection, you\u2019ll be familiar with seeing these tiles load in one by one.\n\nThe Static Map API\n\nThe good news is that these tiles (or tiles just like them) can be used as regular images on your site. Google has a Static Map API which not only gives you a handy interface to retrieve a tile for the exact area you need, but also allows you to place pins, and zoom and centre the tile so that the image looks just so. \n\nThis means that you can create a static, non-JavaScript version of your slippy map\u2019s initial (or ideal) state to load into your page as a regular image, and then have the JavaScript map hijack the image and make it slippy.\n\nClearly, that\u2019s not going to be a perfect solution for every map\u2019s requirements. It doesn\u2019t allow for panning, zooming or interrogation without JavaScript. However, for the majority of straightforward map uses online, a static map makes a great alternative for those visitors without JavaScript.\n\nHere\u2019s the how\n\nRetrieving a static map tile is staggeringly easy \u2013 it\u2019s just a case of forming a URL with the correct arguments and then using that as the src of an image tag.\n\n<img src=\"http://maps.google.com/maps/api/staticmap?center=...&zoom=...&size=...&maptype=...&markers=...&sensor=false\" alt=\"Map\" />\n\nAs you can see, there are a few key options that we pass along to the base URL. All of these should be familiar to anyone who\u2019s worked with the JavaScript API.\n\n\n\tcenter determines the point on which the map is centred. This can be latitude and longitude values, or simply an address which is then geocoded.\n\tzoom sets the zoom level.\n\tsize is the pixel dimensions of the image you require.\n\tmaptype can be roadmap, satellite, terrain or hybrid.\n\tmarkers sets one or more pin locations. Markers can be labelled, have different colours, and so on \u2013 there\u2019s quite a lot of control available.\n\tsensor states whether you are using a sensor to determine the user\u2019s location. When just embedding a map in a web page, set this to false.\n\n\nThere are many options, including plotting paths and setting the image format, which can all be found in the straightforward documentation.\n\nAdding to your page\n\nIf you\u2019ve worked with the JavaScript API, you\u2019ll know that it needs a container element which you inject the map into:\n\n<div id=\"map\"></div>
\n\nAll you need to do is put your static image inside that container:\n\n
<div id=\"map\">\n\t<img src=\"http://maps.google.com/maps/api/staticmap?center=...&zoom=...&size=...&maptype=...&markers=...&sensor=false\" alt=\"Map\" />\n</div>
\n\nAnd then, in your JavaScript, find the image and remove it. For example, with jQuery you\u2019d simply use:\n\n$('#map img').remove();\n\nWhy not use a