{"rowid": 314, "title": "Easy Ajax with Prototype", "contents": "There\u2019s little more impressive on the web today than a appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of an task into a pleasurable activity.\n\nBut it\u2019s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it\u2019s just so much damn effort. Well, the good news is that \u2013 ta-da \u2013 it doesn\u2019t have to be a headache. But man does it still look impressive. Here\u2019s how to amaze your friends.\n\nIntroducing prototype.js\n\nPrototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it\u2019s a JavaScript file which you link into your page that then enables you to do cool stuff.\n\nThere\u2019s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it\u2019s good to go. What a nice man that Mr Stephenson is \u2013 friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp.\n\nFirst step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree.\n\nCutting to the chase\n\nBefore I go on and set up an example of how to use this, let\u2019s just get to the crux. Here\u2019s how Prototype enables you to make a simple Ajax call and dump the results back to the page:\n\nvar url = 'myscript.php';\nvar pars = 'foo=bar';\nvar target = 'output-div';\t\nvar myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});\n\nThis snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page.\n\nKnocking up a basic example\n\nSo to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call too.\n\nSo, to that basic HTML page for the user to interact with. Here\u2019s one I found whilst out carol singing:\n\n\n\n\n \n Easy Ajax\n \n \n \n\n
\n
\n \n \n \n
\n
\n
As you can see, I\u2019ve linked in prototype.js, and also a file called ajax.js, which is where we\u2019ll be putting our glue. (Careful where you leave your glue, kids.)\n\nOur basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There\u2019s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call. You\u2019ll also notice that the form has a submit button \u2013 this is so that it can function as a regular form when no JavaScript is available. It\u2019s important not to get carried away and forget the basics of accessibility.\n\nMeanwhile, back at the server\n\nSo we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you\u2019d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it.\n\nHere\u2019s a quick PHP script \u2013 greeting.php \u2013 that Santa brought me early.\n\n<?php\n // escape the input before echoing it back\n $the_name = htmlspecialchars($_GET['greeting-name']);\n echo \"Season's Greetings, $the_name!\";\n?>\n\nYou\u2019ll perhaps want to do something a little more complex within your own projects. Just sayin\u2019.\n\nGluing it all together\n\nInside our ajax.js file, we need to hook this all together. We\u2019re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. Here\u2019s how we attach an onload event to the window object and get it to call a function named init():\n\nEvent.observe(window, 'load', init, false);\n\nNow we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field.\n\nfunction init(){\n $('greeting-submit').style.display = 'none';\n Event.observe('greeting-name', 'keyup', greet, false);\n}\n\nAs you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this:\n\nfunction greet(){\n var url = 'greeting.php';\n var pars = 'greeting-name='+escape($F('greeting-name'));\n var target = 'greeting';\n var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});\n}\n\nThe key points to note here are that any user input needs to be escaped before being put into the parameters so that it\u2019s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call.\n\nThat\u2019s it\n\nNo, seriously. That\u2019s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-01T00:00:00+00:00", "url": "https://24ways.org/2005/easy-ajax-with-prototype/", "topic": "code"} {"rowid": 336, "title": "Practical Microformats with hCard", "contents": "You\u2019ve probably heard about microformats over the last few months. You may have even read the easily digestible introduction at Digital Web Magazine, but perhaps you\u2019ve not found time to actually implement much yet. That\u2019s understandable, as it can sometimes be difficult to see exactly what you\u2019re adding by applying a microformat to a page. Sure, you\u2019re semantically enhancing the information you\u2019re marking up, and the Semantic Web is a great idea and all, but what benefit is it right now, today? \n\nWell, the answer to that question is simple: you\u2019re adding lots of information that can be and is being used on the web here and now. The big ongoing battle amongst the big web companies is one of territory over information. Everyone\u2019s grasping for as much data as possible. Some of that information many of us are cautious to give away, but a lot of it is happy to be freely available. Of the data you\u2019re giving away, it makes sense to give it as much meaning as possible, thus enabling anyone from your friends and family to the giant search company down the road to make the most of it.\n\nOk, enough of the waffle, let\u2019s get working.\n\nIntroducing hCard\n\nYou may have come across hCard. It\u2019s a microformat for describing contact information (or really address book information) from within your HTML. It\u2019s based on the vCard format, which is the format the contacts/address book program on your computer uses.
All the usual fields are available \u2013 name, address, town, website, email, you name it.\n\nIf you\u2019re running Firefox and Greasemonkey (or if you can, just to try this out), install this user script. What it does is look for instances of the hCard microformat in a page, and then add in a link to pass any hCards it finds to a web service which will convert it to a vCard. Take a look at the About the author box at the bottom of this article. It\u2019s an hCard, so you should be able to click the icon the user script inserts and add me to your Outlook contacts or OS X Address Book with just a click.\n\nSo microformats are useful after all. Free microformats all round!\n\nImplementing hCard\n\nThis is the really easy bit. All the hCard microformat is, is a bunch of predefined class names that you apply to the markup you\u2019ve probably already got around your contact information. Let\u2019s take the example of the About the author box from this article. Here\u2019s how the markup looks without hCard:\n\n
<div class=\"bio\">\n <h3>About the author</h3>\n <p>Drew McLellan is a web developer, author and no-good swindler from \n just outside London, England. At the \n <a href=\"http://www.webstandards.org/\">Web Standards Project</a> he works \n on press, strategy and tools. Drew keeps a \n <a href=\"http://allinthehead.com/\">personal weblog</a> covering web \n development issues and themes.</p>\n</div>
\n\nThis is a really simple example because there are only two key bits of address book information here: my name and my website address. Let\u2019s push it a little and say that the Web Standards Project is the organisation I work for \u2013 that gives us Name, Company and URL.\n\nTo kick off an hCard, you need a containing object with a class of vcard. The div I already have with a class of bio is perfect for this \u2013 all it needs to do is contain the rest of the contact information.\n\nThe next thing to identify is my name. hCard uses a class of fn (meaning Full Name) to identify a name. As in this case there\u2019s no element surrounding my name, we can just use a span. These changes give us:\n\n
<div class=\"bio vcard\">\n <h3>About the author</h3>\n <p><span class=\"fn\">Drew McLellan</span> is a web developer...\n\nThe two remaining items are my URL and the organisation I belong to. The class names designated for those are url and org respectively. As both of those items are links in this case, I can apply the classes to those links. So here\u2019s the finished hCard.\n\n
<div class=\"bio vcard\">\n <h3>About the author</h3>\n <p><span class=\"fn\">Drew McLellan</span> is a web developer, author and \n no-good swindler from just outside London, England. \n At the <a class=\"org\" href=\"http://www.webstandards.org/\">Web Standards Project</a> \n he works on press, strategy and tools. Drew keeps a \n <a class=\"url\" href=\"http://allinthehead.com/\">personal weblog</a> covering web \n development issues and themes.</p>\n</div>
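\n\nFor comparison, here\u2019s roughly the same information expressed in vCard format \u2013 this is a hand-written sketch of the sort of thing a converter service might produce, rather than the exact output of any particular tool:\n\nBEGIN:VCARD\nVERSION:3.0\nFN:Drew McLellan\nORG:Web Standards Project\nURL:http://allinthehead.com/\nEND:VCARD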
\n\nOK, that was easy. By just applying a few easy class names to the HTML I was already publishing, I\u2019ve implemented an hCard that right now anyone with Greasemonkey can click to add to their address book, that Google and Yahoo! and whoever else can index and work out important things like which websites are associated with my name if they so choose (and boy, will they so choose), and in the future who knows what. In terms of effort, practically nil.\n\nWhere next?\n\nSo that was a trivial example, but to be honest it doesn\u2019t really get much more complex even with the most pernickety permutations. Because hCard is based on vCard (a mature and well thought-out standard), it\u2019s all tried and tested. Here are some good next steps.\n\n\n\tPlay with the hCard Creator\n\tTake a deep breath and read the spec\n\tStart implementing hCard as you go on your own projects \u2013 it takes very little time\n\n\nhCard is just one of an ever-increasing number of microformats. If this tickled your fancy, I suggest subscribing to the microformats site in your RSS reader to keep in touch with new developments.\n\nWhat\u2019s the take-away?\n\nThe take-away is this. They may sound like just more Web 2-point-HoHoHo hype, but microformats are a well thought-out and easy-to-implement way of adding greater depth to the information you publish online. They have some nice benefits right away \u2013 certainly at geek-level \u2013 but in the longer term they become much more significant. We\u2019ve been at this long enough to know that the web has a long, long memory and that what you publish today will likely be around for years. By putting the extra depth of meaning into your documents now, you can help ensure that they\u2019ll continue to be useful in the future, and not just a bunch of flat ASCII.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-06T00:00:00+00:00", "url": "https://24ways.org/2005/practical-microformats-with-hcard/", "topic": "code"} {"rowid": 318, "title": "Auto-Selecting Navigation", "contents": "In the article Centered Tabs with CSS Ethan laid out a tabbed navigation system which can be centred on the page. A frequent requirement for any tab-based navigation is to be able to visually represent the currently selected tab in some way.\n\nIf you\u2019re using a server-side language such as PHP, it\u2019s quite easy to write something like class=\"selected\" into your markup, but it can be even simpler than that.\n\nLet\u2019s take the navigation div from Ethan\u2019s article as an example.\n\n
<div id=\"navigation\">\n <ul>\n  <li><a href=\"#\">Home</a></li>\n  <li><a href=\"#\">About</a></li>\n  <li><a href=\"#\">Work</a></li>\n  <li><a href=\"#\">Products</a></li>\n  <li><a href=\"#\">Contact</a></li>\n </ul>\n</div>\n\nAs you can see we have a standard unordered list which is then styled with CSS to look like tabs. By giving each tab a class which describes its logical section of the site, if we were to then apply a class to the body tag of each page showing the same, we could write a clever CSS selector to highlight the correct tab on any given page. \n\nSound complicated? Well, it\u2019s not a trivial concept, but actually applying it is dead simple.\n\nModifying the markup\n\nFirst thing is to place a class name on each li in the list:\n\n
<div id=\"navigation\">\n <ul>\n  <li class=\"home\"><a href=\"#\">Home</a></li>\n  <li class=\"about\"><a href=\"#\">About</a></li>\n  <li class=\"work\"><a href=\"#\">Work</a></li>\n  <li class=\"products\"><a href=\"#\">Products</a></li>\n  <li class=\"contact\"><a href=\"#\">Contact</a></li>\n </ul>\n</div>\n\nThen, on each page of your site, apply a matching class to the body tag to indicate which section of the site that page is in. For example, on your About page:\n\n<body class=\"about\">\n...\n</body>\n\nWriting the CSS selector\n\nYou can now write a single CSS rule to match the selected tab on any given page. The logic is that you want to match the \u2018about\u2019 tab on the \u2018about\u2019 page and the \u2018products\u2019 tab on the \u2018products\u2019 page, so the selector looks like this:\n\nbody.home #navigation li.home,\n body.about #navigation li.about,\n body.work #navigation li.work,\n body.products #navigation li.products,\n body.contact #navigation li.contact{\n ... whatever styles you need to show the tab selected ...\n } \n\nSo all you need to do when you create a new page in your site is to apply a class to the body tag to say which section it\u2019s in. The CSS will do the rest for you \u2013 without any server-side help.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-10T00:00:00+00:00", "url": "https://24ways.org/2005/auto-selecting-navigation/", "topic": "code"} {"rowid": 315, "title": "Edit-in-Place with Ajax", "contents": "Back on day one we looked at using the Prototype library to take all the hard work out of making a simple Ajax call. While that was fun and all, it didn\u2019t go that far towards implementing something really practical. We dipped our toes in, but haven\u2019t learned to swim yet.\n\nSo here is swimming lesson number one. Anyone who\u2019s used Flickr to publish their photos will be familiar with the edit-in-place system used for quickly amending titles and descriptions on photographs. Hovering over an item turns its background yellow to indicate it is editable. A simple click loads the text into an edit box, right there on the page.\n\nPrototype includes all sorts of useful methods to help reproduce something like this for our own projects. As well as the simple Ajax GETs we learned how to do last time, we can also do POSTs (which we\u2019ll need here) and a whole bunch of manipulations to the user interface \u2013 all through simple library calls. Here\u2019s what we\u2019re building, so let\u2019s do it.\n\nGetting Started\n\nThere are two major components to this process: the user interface manipulation and the Ajax call itself. Our set-up is much the same as last time (you may wish to read the first article if you\u2019ve not already done so). We have a basic HTML page which links in the prototype.js file and our own editinplace.js. Here\u2019s what Santa dropped down my chimney: \n\n<html>\n<head>\n <title>Edit-in-Place with Ajax</title>\n <script src=\"prototype.js\" type=\"text/javascript\"></script>\n <script src=\"editinplace.js\" type=\"text/javascript\"></script>\n</head>\n<body>\n\n <h1>Edit-in-place</h1>\n\n <div id=\"desc\">\n  Dashing through the snow on a one horse open sleigh.\n </div>\n\n</body>\n</html>
So that\u2019s our page. The editable item is going to be the <div> called desc. The process goes something like this:\n\n\n\tHighlight the area onMouseOver\n\tClear the highlight onMouseOut\n\tIf the user clicks, hide the area and replace it with a <textarea> and Save and Cancel buttons\n\tIf the user clicks Cancel, remove the editor and restore the original\n\tIf the user clicks Save, make the Ajax POST and display the result\n\n\nFirst the listeners get attached once the window has loaded \u2013 highlighting on mouseover, clearing the highlight on mouseout, and switching to the editor on click:\n\nEvent.observe(window, 'load', init, false);\n\nfunction init(){\n var obj = $('desc');\n Event.observe(obj, 'mouseover', function(){showAsEditable(obj)}, false);\n Event.observe(obj, 'mouseout', function(){showAsEditable(obj, true)}, false);\n Event.observe(obj, 'click', function(){edit(obj)}, false);\n}\n\nThe highlight itself is just a class name being added and removed:\n\nfunction showAsEditable(obj, clear){\n if (!clear){\n  Element.addClassName(obj, 'editable');\n }else{\n  Element.removeClassName(obj, 'editable');\n }\n}\n\nWhen the user clicks, the edit() function swaps the element out for an editor:\n\nfunction edit(obj){\n Element.hide(obj);\n\n var textarea = '<div id=\"' + obj.id + '_editor\"><textarea cols=\"60\" rows=\"4\" name=\"' + obj.id + '_edit\" id=\"' + obj.id + '_edit\">' + obj.innerHTML + '</textarea>';\n\n var button = '<input id=\"' + obj.id + '_save\" type=\"button\" value=\"SAVE\" /> OR <input id=\"' + obj.id + '_cancel\" type=\"button\" value=\"CANCEL\" /></div>';\n\n new Insertion.After(obj, textarea+button);\n\n Event.observe(obj.id+'_save', 'click', function(){saveChanges(obj)}, false);\n Event.observe(obj.id+'_cancel', 'click', function(){cleanUp(obj)}, false);\n\n }\n\nThe first thing to do is to hide the object. Prototype comes to the rescue with Element.hide() (and of course, Element.show() too). Following that, we build up the textarea and buttons as a string, and then use Insertion.After() to place our new editor underneath the (now hidden) editable object.\n\nThe last thing to do before we leave the user to edit is to attach listeners to the Save and Cancel buttons to call either the saveChanges() function, or to cleanUp() after a cancel.\n\nIn the event of a cancel, we can clean up behind ourselves like so:\n\nfunction cleanUp(obj, keepEditable){\n Element.remove(obj.id+'_editor');\n Element.show(obj);\n if (!keepEditable) showAsEditable(obj, true);\n }\n\nSaving the Changes\n\nThis is where all the Ajax fun occurs. Whilst the previous article introduced Ajax.Updater() for simple Ajax calls, in this case we need a little bit more control over what happens once the response is received. For this purpose, Ajax.Request() is perfect. We can use the onSuccess and onFailure parameters to register functions to handle the response.\n\nfunction saveChanges(obj){\n var new_content = escape($F(obj.id+'_edit'));\n\n obj.innerHTML = \"Saving...\";\n cleanUp(obj, true);\n\n var success = function(t){editComplete(t, obj);}\n var failure = function(t){editFailed(t, obj);}\n\n var url = 'edit.php';\n var pars = 'id=' + obj.id + '&content=' + new_content;\n var myAjax = new Ajax.Request(url, {method:'post',\n postBody:pars, onSuccess:success, onFailure:failure});\n }\n\n function editComplete(t, obj){\n obj.innerHTML = t.responseText;\n showAsEditable(obj, true);\n }\n\n function editFailed(t, obj){\n obj.innerHTML = 'Sorry, the update failed.';\n cleanUp(obj);\n }\n\nAs you can see, we first grab the contents of the textarea into the variable new_content. We then remove the editor, set the content of the original object to \u201cSaving\u2026\u201d to show that an update is occurring, and make the Ajax POST.\n\nIf the Ajax fails, editFailed() sets the contents of the object to \u201cSorry, the update failed.\u201d Admittedly, that\u2019s not a very helpful way to handle the error but I have to limit the scope of this article somewhere. It might be a good idea to stow away the original contents of the object (obj.preUpdate = obj.innerHTML) for later retrieval before setting the content to \u201cSaving\u2026\u201d. No one likes a failure \u2013 especially a messy one.\n\nIf the Ajax call is successful, the server-side script returns the edited content, which we then place back inside the object from editComplete, and tidy up.\n\nMeanwhile, back at the server\n\nThe missing piece of the puzzle is the server-side script for committing the changes to your database. Obviously, any solution I provide here is not going to fit your particular application. For the purposes of getting a functional demo going, here\u2019s what I have in PHP.\n\n<?php\n echo $_POST['content'];\n?>\n\nNot exactly rocket science is it? I\u2019m just catching the content item from the POST and echoing it back. For your application to be useful, however, you\u2019ll need to know exactly which record you should be updating. I\u2019m passing in the ID of my
<div>, which is not a fat lot of use. You can modify saveChanges() to post back whatever information your app needs to know in order to process the update.\n\nYou should also check the user\u2019s credentials to make sure they have permission to edit whatever it is they\u2019re editing. Basically the same rules apply as with any script in your application.\n\nLimitations\n\nThere are a few bits and bobs that in an ideal world I would tidy up. The first is the error handling, as I\u2019ve already mentioned. The second is that from an idealistic standpoint, I\u2019d rather not be using innerHTML. However, the reality is that it\u2019s presently the most efficient way of making large changes to the document. If you\u2019re serving as XML, remember that you\u2019ll need to replace these with proper DOM nodes.\n\nIt\u2019s also important to note that it\u2019s quite difficult to make something like this universally accessible. Whenever you start updating large chunks of a document based on user interaction, a lot of non-traditional devices don\u2019t cope well. The benefit of this technique, though, is that if JavaScript is unavailable none of the functionality gets implemented at all \u2013 it fails silently. It is for this reason that this shouldn\u2019t be used as a complete replacement for a traditional, universally accessible edit form. It\u2019s a great time-saver for those with the ability to use it, but it\u2019s no replacement.\n\nSee it in action\n\nI\u2019ve put together an example page using the inert PHP script above. That is to say, your edits aren\u2019t committed to a database, so the example is reset when the page is reloaded.", "year": "2005", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2005-12-23T00:00:00+00:00", "url": "https://24ways.org/2005/edit-in-place-with-ajax/", "topic": "code"} {"rowid": 132, "title": "Tasty Text Trimmer", "contents": "In most cases, when designing a user interface it\u2019s best to make a decision about how data is best displayed and stick with it. Failing to make a decision ultimately leads to too many user options, which in turn can be taxing on the poor old user. \n\nUnder some circumstances, however, it\u2019s good to give the user freedom in customising their workspace. One good example of this is the \u2018Article Length\u2019 tool in Apple\u2019s Safari RSS reader. Sliding a slider left or right dynamically changes the length of each article shown. It\u2019s that kind of awesomey magic stuff that\u2019s enough to keep you from sleeping. Let\u2019s build one.\n\nThe Setup\n\nLet\u2019s take a page that has lots of long text items, a bit like a news page or like Safari\u2019s RSS items view. If we were to attach a class name to each element we wanted to resize, that would give us something to hook onto from the JavaScript. \n\nExample 1: The basic page.\n\nAs you can see, I\u2019ve wrapped my items in a DIV and added a class name of chunk to them. It\u2019s these chunks that we\u2019ll be finding with the JavaScript. Speaking of which \u2026\n\nOur Core Functions\n\nThere are two main tasks that need performing in our script. The first is to find the chunks we\u2019re going to be resizing and store their original contents away somewhere safe. We\u2019ll need this so that if we trim the text down we\u2019ll know what it was if the user decides they want it back again. We\u2019ll call this loadChunks.
\n\nvar loadChunks = function(){\n\tvar everything = document.getElementsByTagName('*');\n\tvar i, l;\n\tchunks\t= [];\n\tfor (i=0, l=everything.length; i<l; i++){\n\t\tif (everything[i].className.indexOf('chunk') > -1){\n\t\t\tchunks.push({\n\t\t\t\tref: everything[i],\n\t\t\t\toriginal: everything[i].innerHTML\n\t\t\t});\n\t\t}\n\t}\n};\n\nThe variable chunks is stored outside of this function so that we can access it from our next core function, which is doTrim.\n\nvar doTrim = function(interval) {\n\tif (!chunks) loadChunks();\n\tvar i, l;\n\tfor (i=0, l=chunks.length; i<l; i++){\n\t\tchunks[i].ref.innerHTML = chunks[i].original.substring(0, interval);\n\t}\n};\n\n<a id=\"anim-link\" href=\"#\" onclick=\"playAnimation()\">Play the animation</a>\n\nThis could be rewritten by removing the onclick attribute, and instead doing the following from within your JavaScript.\n\ndocument.getElementById('anim-link').onclick = playAnimation;\n\nIt\u2019s all in the timing\n\nOf course, it\u2019s never quite that easy. To be able to attach that onclick, the element you\u2019re targeting has to exist in the page, and the page has to have finished loading for the DOM to be available. This is where the onload event is handy, as it fires once everything has finished loading. Common practice is to have a function called something like init() (short for initialise) that sets up all these event handlers as soon as the page is ready.\n\nBack in the day we would have used the onload attribute on the <body> element to do this, but of course what we really want is:\n\nwindow.onload = init;\n\nAs an interesting side note, we\u2019re using init here rather than init() so that the function is assigned to the event. If we used the parentheses, the init function would have been run at that moment, and the result of running the function (rather than the function itself) would be assigned to the event. Subtle, but important.\n\nAs is becoming apparent, nothing is ever simple, and we can\u2019t just go around assigning our initialisation function to window.onload. What if we\u2019re using other scripts in the page that might also want to listen out for that event? Whichever script got there last would overwrite everything that came before it. To manage this, we need a script that checks for any existing event handlers, and adds the new handler to it. Most of the JavaScript libraries have their own systems for doing this. If you\u2019re not using a library, Simon Willison has a good stand-alone example:\n\nfunction addLoadEvent(func) {\n\tvar oldonload = window.onload;\n\tif (typeof window.onload != 'function') {\n\t\twindow.onload = func;\n\t} else {\n\t\twindow.onload = function() {\n\t\t\tif (oldonload) {\n\t\t\t\toldonload();\n\t\t\t}\n\t\t\tfunc();\n\t\t}\n\t}\n}\n\nObviously this is just a toe in the events model\u2019s complex waters. Some good further reading is PPK\u2019s Introduction to Events.\n\nCarving out your own space\n\nAnother problem that rears its ugly head when combining multiple scripts on a single page is that of making sure that the scripts don\u2019t conflict. One big part of that is ensuring that no two scripts are trying to create functions or variables with the same names. Reusing a name in JavaScript just over-writes whatever was there before it.\n\nWhen you create a function in JavaScript, you\u2019ll be familiar with doing something like this.\n\nfunction foo() {\n\t... goodness ...\n}\n\nThis is actually just creating a variable called foo and assigning a function to it. It\u2019s essentially the same as the following.\n\nvar foo = function() {\n\t...
goodness ...\n}\n\nThis name foo is by default created in what\u2019s known as the \u2018global namespace\u2019 \u2013 the general pool of variables within the page. You can quickly see that if two scripts use foo as a name, they will conflict because they\u2019re both creating those variables in the global namespace.\n\nA good solution to this problem is to add just one name into the global namespace, make that one item either a function or an object, and then add everything else you need inside that. This takes advantage of JavaScript\u2019s variable scoping to contain your mess and stop it interfering with anyone else.\n\nCreating An Object\n\nSay I was wanting to write a bunch of functions specifically for using on a site called \u2018Foo Online\u2019. I\u2019d want to create my own object with a name I think is likely to be unique to me.\n\nvar FOOONLINE = {};\n\nWe can then start assigning functions and variables to it like so:\n\nFOOONLINE.message = 'Merry Christmas!';\nFOOONLINE.showMessage = function() {\n\talert(this.message);\n};\n\nCalling FOOONLINE.showMessage() in this example would alert out our seasonal greeting. The exact same thing could also be expressed in the following way, using the object literal syntax.\n\nvar FOOONLINE = {\n\tmessage: 'Merry Christmas!',\n\tshowMessage: function() {\n\t\talert(this.message);\n\t}\n};\n\nCreating A Function to Create An Object\n\nWe can extend this idea a bit further by using a function that we run in place to return an object. The end result is the same, but this time we can use closures to give us something like private methods and properties of our object.\n\nvar FOOONLINE = function(){\n\tvar message = 'Merry Christmas!';\n\treturn {\n\t\tshowMessage: function(){\n\t\t\talert(message);\n\t\t}\n\t}\n}();\n\nThere are two important things to note here. The first is the parentheses at the end of line 8. Just as we saw earlier, this runs the function in place and causes its result to be assigned. In this case the result of our function is the object that is returned at line 3.\n\nThe second important thing to note is the use of the var keyword on line 2. This ensures that the message variable is created inside the scope of the function and not in the global namespace. Because of the way closure works (which if you\u2019re not familiar with, just suspend your disbelief for a moment) that message variable is visible to everything inside the function but not outside. Trying to read FOOONLINE.message from the page would return undefined.\n\nThis is useful for simulating the concept of private class methods and properties that exist in other programming languages. I like to take the approach of making everything private unless I know it\u2019s going to be needed from outside, as it makes the interface into your code a lot clearer for someone else to read. \n\nAll Change, Please\n\nSo that was just a whistle-stop tour of a couple of the bigger changes that can help to make your scripts better page citizens. I hope it makes useful Sunday reading, but obviously this is only the tip of the iceberg when it comes to designing modular, reusable code.\n\nFor some, this is all familiar ground already. If that\u2019s the case, I encourage you to perhaps submit a comment with any useful resources you\u2019ve found that might help others get up to speed.
Ultimately it\u2019s in all of our interests to make sure that all our JavaScript interoperates well \u2013 share your tips.", "year": "2006", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2006-12-10T00:00:00+00:00", "url": "https://24ways.org/2006/writing-responsible-javascript/", "topic": "code"} {"rowid": 165, "title": "Transparent PNGs in Internet Explorer 6", "contents": "Newer breeds of browser such as Firefox and Safari have offered support for PNG images with full alpha channel transparency for a few years. With the use of hacks, support has been available in Internet Explorer 5.5 and 6, but the hacks are non-ideal and have been tricky to use. With IE7 winning masses of users from earlier versions over the last year, full PNG alpha-channel transparency is becoming more of a reality for day-to-day use.\n\nHowever, there are still significant numbers of IE6 users out there who we can\u2019t leave out in the cold this Christmas, so in this article I\u2019m going to look at what we can do to support IE6 users whilst taking full advantage of transparency for the majority of a site\u2019s visitors.\n\nSo what\u2019s alpha channel transparency?\n\nCast your minds back to the Ghost of Christmas Past, the humble GIF. Images in GIF format offer transparency, but that transparency is either on or off for any given pixel. Each pixel\u2019s either fully transparent, or a solid colour. In GIF, transparency is effectively just a special colour you can choose for a pixel.\n\nThe PNG format tackles the problem rather differently. As well as having any colour you choose, each pixel also carries a separate channel of information detailing how transparent it is. This alpha channel enables a pixel to be fully transparent, fully opaque, or critically, any step in between.\n\nThis enables designers to produce images that can have, for example, soft edges without any of the \u2018halo effect\u2019 traditionally associated with GIF transparency. If you\u2019ve ever worked on a site that has different colour schemes and therefore requires multiple versions of each graphic against a different colour, you\u2019ll immediately see the benefit. \n\nWhat\u2019s perhaps more interesting than that, however, is the extra creative freedom this gives designers in creating beautiful sites that can remain web-like in their ability to adjust, scale and reflow.\n\nThe Internet Explorer problem\n\nUp until IE7, there has been no fully native support for PNG alpha channel transparency in Internet Explorer. However, since IE5.5 there has been some support in the form of a proprietary filter called the AlphaImageLoader. Internet Explorer filters can be applied directly in your CSS (for both inline and background images), or by setting the same CSS property with JavaScript. \n\nCSS:\n\nimg {\n\tfilter: progid:DXImageTransform.Microsoft.AlphaImageLoader(...);\n}\n\nJavaScript:\n\nimg.style.filter = \"progid:DXImageTransform.Microsoft.AlphaImageLoader(...)\";\n\nThat may sound like a problem solved, but all is not as it may appear. Firstly, as you may realise, there\u2019s no CSS property called filter in the W3C CSS spec. It\u2019s a proprietary extension added by Microsoft that could potentially cause other browsers to reject your entire CSS rule. \n\nSecondly, AlphaImageLoader does not magically add full PNG transparency support so that a PNG in the page will just start working.
Instead, when applied to an element in the page, it draws a new rendering surface in the same space that element occupies and loads a PNG into it. If that sounds weird, it\u2019s because that\u2019s precisely what it is. However, by and large the result is that PNGs with an alpha channel can be accommodated.\n\nThe pitfalls\n\nSo, whilst support for PNG transparency in IE5.5 and 6 is possible, it\u2019s not without its problems.\n\nBackground images cannot be positioned or repeated\n\nThe AlphaImageLoader does work for background images, but only for the simplest of cases. If your design requires the image to be tiled (background-repeat) or positioned (background-position) you\u2019re out of luck. The AlphaImageLoader allows you to set a sizingMethod to either crop the image (if necessary) or to scale it to fit. Not massively useful, but something at least.\n\nDelayed loading and resource use\n\nThe AlphaImageLoader can be quite slow to load, and appears to consume more resources than a standard image when applied. Typically, you\u2019d need to add thousands of GIFs or JPEGs to a page before you saw any noticeable impact on the browser, but with the AlphaImageLoader filter applied Internet Explorer can become sluggish after just a handful of alpha channel PNGs.\n\nThe other noticeable effect is that as more instances of the AlphaImageLoader are applied, the longer it takes to render the PNGs with their transparency. The user sees the PNG load in its original non-supported state (with black or grey areas where transparency should be) before one by one the filter kicks in and makes them properly transparent.\n\nBoth the issue of sluggish behaviour and delayed load only really manifest themselves with volume and size of image. Use just a couple of instances and it\u2019s fine, but be careful adding more than five or six. As ever, test, test, test.\n\nLinks become unclickable, forms unfocusable \n\nThis is a big one. There\u2019s a bug/weirdness with AlphaImageLoader that sometimes prevents interaction with links and forms when a PNG background image is used. This is sometimes reported as a z-index issue, but I don\u2019t believe it is. Rather, it\u2019s an artefact of that weird way the filter gets applied to the document almost outside of the normal render process. \n\nOften this can be solved by giving the links or form elements hasLayout using position: relative; where possible. However, this doesn\u2019t always work and the non-interaction problem cannot always be solved. You may find yourself having to go back to the drawing board.\n\nSidestepping the danger zones\n\nFrankly, it\u2019s pretty bad news if you design a site, have that design signed off by your client, build it and then find out only at the end (because you don\u2019t know what might trigger a problem) that your search field can\u2019t be focused in IE6. That\u2019s an absolute nightmare, and whilst it\u2019s not likely to happen, it\u2019s possible that it might. It\u2019s happened to me. So what can you do?\n\nThe best approach I\u2019ve found to this scenario is\n\n\n\tIsolate the PNG or PNGs that are causing the problem. Step through the PNGs in your page, commenting them out one by one and retesting. Typically it\u2019ll be the nearest PNG to the problem, so try there first. Keep going until you can click your links or focus your form fields.\n\tThis is where you really need luck on your side, because you\u2019re going to have to fake it. 
This will depend on the design of the site, but some way or other create a replacement GIF or JPEG image that will give you an acceptable result. Then use conditional comments to serve that image to only users of IE older than version 7.\n\nA hack, you say? Well, you started it chum.\n\nApplying AlphaImageLoader\n\nBecause the filter property is invalid CSS, the safest pragmatic approach is to apply it selectively with JavaScript for only Internet Explorer versions 5.5 and 6. This helps ensure that by default you\u2019re serving standard CSS to browsers that support both the CSS and PNG standards correctly, and then selectively patching up only the browsers that need it. \n\nSeveral years ago, Aaron Boodman wrote and released a script called sleight for doing just that. However, sleight dealt only with images in the page, and not background images applied with CSS. Building on top of Aaron\u2019s work, I hacked sleight and came up with bgsleight for applying the filter to background images instead. That was in 2003, and over the years I\u2019ve made a couple of improvements here and there to keep it ticking over and to resolve conflicts between sleight and bgsleight when used together. However, with alpha channel PNGs becoming much more widespread, it\u2019s time for a new version.\n\nIntroducing SuperSleight\n\nSuperSleight adds a number of new and useful features that have come from the day-to-day needs of working with PNGs.\n\n\n\tWorks with both inline and background images, replacing the need for both sleight and bgsleight\n\tWill automatically apply position: relative to links and form fields if they don\u2019t already have position set. (Can be disabled.)\n\tCan be run on the entire document, or just a selected part where you know the PNGs are. This is better for performance.\n\tDetects background images set to no-repeat and sets the scaleMode to crop rather than scale.\n\tCan be re-applied by any other JavaScript in the page \u2013 useful if new content has been loaded by an Ajax request.\n\n\n Download SuperSleight \n\nImplementation\n\nGetting SuperSleight running on a page is quite straightforward: you just need to link the supplied JavaScript file (or the minified version if you prefer) into your document inside conditional comments so that it is delivered to only Internet Explorer 6 or older.\n\n<!--[if lte IE 6]>\n<script type=\"text/javascript\" src=\"supersleight.js\"></script>\n<![endif]-->\n\nSupplied with the JavaScript is a simple transparent GIF file. The script replaces the existing PNG with this before re-layering the PNG over the top using AlphaImageLoader. You can change the name or path of the image in the top of the JavaScript file, where you\u2019ll also find the option to turn off the adding of position: relative to links and fields if you don\u2019t want that.\n\nThe script is kicked off with a call to supersleight.init() at the bottom. The scope of the script can be limited to just one part of the page by passing an ID of an element to supersleight.limitTo().
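\n\nFor example, to restrict the script to one container and then set it running, the calls at the bottom of the file would look something like this (content here is just a placeholder ID for whatever element holds your PNGs):\n\nsupersleight.limitTo('content');\nsupersleight.init();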
And that\u2019s all there is to it.\n\nUpdate March 2008: a version of this script as a jQuery plugin is also now available.", "year": "2007", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2007-12-01T00:00:00+00:00", "url": "https://24ways.org/2007/supersleight-transparent-png-in-ie6/", "topic": "code"} {"rowid": 166, "title": "Performance On A Shoe String", "contents": "Back in the summer, I happened to notice the official Wimbledon All England Tennis Club site had jumped to the top of Alexa\u2019s Movers & Shakers list \u2014 a list that tracks sites that have had the biggest upturn or downturn in traffic. The lawn tennis championships were underway, and so traffic had leapt from almost nothing to crazy-busy in no time at all. \n\nMany sites have similar peaks in traffic, especially when they\u2019re based around scheduled events. No one cares about the site for most of the year, and then all of a sudden \u2013 wham! \u2013 things start getting warm in the data centre. Whilst the thought of chestnuts roasting on an open server has a certain appeal, it\u2019s less attractive if you care about your site being available to visitors. Take a look at this Alexa traffic graph showing traffic patterns for superbowl.com at the beginning of each year, and wimbledon.org in the month of July.\n\nTraffic graph from Alexa.com \n\nWhilst not on the same scale or with such dramatic peaks, we have a similar pattern of traffic here at 24ways.org. Over the last three years we\u2019ve seen a dramatic pick up in traffic over the month of December (as would be expected) and then a much lower, although steady load throughout the year. What we do have, however, is the luxury of knowing when the peaks will be. For a normal site, be that a blog, small scale web app, or even a small corporate site, you often just cannot predict when you might get slashdotted, end up on the front page of Digg or linked to from a similarly high-profile site. You just don\u2019t know when the peaks will be.\n\nIf you\u2019re a big commercial enterprise like the Super Bowl, scaling up for that traffic is simply a cost of doing business. But for most of us, we can\u2019t afford to have massive capacity sat there unused for 90% of the year. What you have to do instead is work out how to deal with as much traffic as possible with the modest resources you have.\n\nIn this article I\u2019m going to talk about some of the things we\u2019ve learned about keeping 24 ways running throughout December, whilst not spending a fortune on hosting we don\u2019t need for 11 months of each year. We\u2019ve not always got it right, but we\u2019ve learned a lot along the way.\n\nThe Problem\n\nTo know how to deal with high traffic, you need to have a basic idea of what happens when a request comes into a web server. 24 ways is hosted on a single small virtual dedicated server with a great little hosting company in the UK. We run Apache with PHP and MySQL all on that one server. When a request comes in, a new Apache process is started to deal with the request (or assigned if there\u2019s one available not doing anything). Each process takes a bunch of memory, so there\u2019s a finite number of processes that you can run, and therefore a finite number of pages you can serve at once before your server runs out of memory.\n\nWith our budget based on whatever is left over after beer, we need to get the best performance we can out of the resources available.
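\n\nTo put some rough, made-up numbers on that: if each Apache process needs around 20MB of memory, and the server has 400MB free once MySQL and friends have taken their share, you can only field about twenty simultaneous requests. Beyond that, visitors start queueing \u2013 or worse.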
\n\nAs the goal is to serve as many pages as quickly as possible, there are several approaches we can take:\n\n\n\tReducing the amount of memory needed by each Apache process\n\tReducing the amount of time each process is needed\n\tReducing the number of requests made to the server\n\n\nYahoo! have published some information on what they call Exceptional Performance, which is well worth reading, and complements many of my examples here. The Yahoo! guidelines very much look at things from a user perspective, which is always important.\n\nServer tweaking\n\nIf you\u2019re in the position of being able to change your server configuration (our set-up gives us root access to what is effectively a virtual machine) there are some basic steps you can take to maximise the available memory and reduce the memory footprint. Without getting too boring and technical (whole books have been written on this) there are a couple of things to watch out for.\n\nFirstly, check what processes you have running that you might not need. Every megabyte of memory that you free up might equate to several thousand extra requests being served each day, so take a look at top and see what\u2019s using up your resources. Quite often a machine configured as a web server will have some kind of mail server running by default. If your site doesn\u2019t use mail (ours doesn\u2019t) make sure it\u2019s shut down and not using resources.\n\nSecondly, have a look at your Apache configuration and particularly what modules are loaded. The method for doing this varies between versions of Apache, but again, every module loaded increases the amount of memory that each Apache process requires and therefore limits the number of simultaneous requests you can deal with.\n\nThe final thing to check is that Apache isn\u2019t configured to start more servers than you have memory for. This is usually done by setting the MaxClients directive. When that limit is reached, your site is going to stop responding to further requests. However, if all else goes well that threshold won\u2019t be reached, and if it does it will at least stop the weight of the traffic taking the entire server down to a point where you can\u2019t even log in to sort it out.\n\nThose are the main tidbits I\u2019ve found useful for this site, although it\u2019s worth repeating that entire books have been written on this subject alone.\n\nCaching\n\nAlthough the site is generated with PHP and MySQL, the majority of pages served don\u2019t come from the database. The process of compiling a page on-the-fly involves quite a few trips to the database for content, templates, configuration settings and so on, and so can be slow and require a lot of CPU. Unless a new article or comment is published, the site doesn\u2019t actually change between requests and so it makes sense to generate each page once, save it to a file and then just serve all following requests from that file.\n\nWe use QuickCache (or rather a plugin based on it) for this. The plugin integrates with our publishing system (Textpattern) to make sure the cache is cleared when something on the site changes. A similar plugin called WP-Cache is available for WordPress, but of course this could be done any number of ways, and with any back-end technology.\n\nThe important principle here is to reduce the time it takes to serve a page by compiling the page once and serving that cached result to subsequent visitors.
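\n\nThe shape of the idea \u2013 reduced here to a bare-bones, hypothetical sketch rather than what QuickCache actually does \u2013 is simply this:\n\n<?php\n// serve a saved copy of the page if one exists\n$cache_file = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';\nif (file_exists($cache_file)) {\n\treadfile($cache_file);\n\texit;\n}\n\n// otherwise build the page as normal, saving a copy as we go\nob_start();\n// ... the expensive PHP and MySQL work happens here ...\nfile_put_contents($cache_file, ob_get_contents());\nob_end_flush();\n?>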
Keep away from your database if you can.\n\nOutsource your feeds\n\nWe get around 36,000 requests for our feed each day. That really only works out at about 7,000 subscribers averaging five-and-a-bit requests a day, but it\u2019s still 36,000 requests we could easily do without. Each request uses resources and particularly during December, all those requests can add up. \n\nThe simple solution here was to switch our feed over to using FeedBurner. We publish the address of the FeedBurner version of our feed here, so those 36,000 requests a day hit FeedBurner\u2019s servers rather than ours. In addition, we get pretty graphs showing how the subscriber-base is building.\n\n\n\nOff-load big files\n\nLarger files like images or downloads pose a problem not in bandwidth, but in the time it takes them to transfer. A typical page request is very quick, a few seconds at the most, resulting in the connection being freed up promptly. Anything that keeps a connection open for a long time is going to start killing performance very quickly.\n\nThis year, we started serving most of the images for articles from a subdomain \u2013 media.24ways.org. Rather than pointing to our own server, this subdomain points to an Amazon S3 account where the files are held. It\u2019s easy to pigeon-hole S3 as merely an online backup solution, and whilst not a fully fledged CDN, S3 works very nicely for serving larger media files. The roughly 20GB of files served this month have cost around $5 in Amazon S3 charges. That\u2019s so affordable it may not be worth even taking the files back off S3 once December has passed.\n\nI found this article on Scalable Media Hosting with Amazon S3 to be really useful in getting started. I upload the files via a Firefox plugin (mentioned in the article) and then edit the ACL to allow public access to the files. The way S3 enables you to point DNS directly at it means that you\u2019re not tied to always using the service, and that it can be transparent to your users.\n\nIf your site uses photographs, consider uploading them to a service like Flickr and serving them directly from there. Many photo sharing sites are happy for you to link to images in this way, but do check the acceptable use policies in case you need to provide a credit or link back.\n\nOff-load small files\n\nYou\u2019ll have noticed the pattern by now \u2013 get rid of as much traffic as possible. When an article has a lot of comments and each of those comments has an avatar along with it, a great many requests are needed to fetch each of those images. In 2006 we started using Gravatar for avatars, but their servers were slow and were holding up page loads. To get around this we started caching the images on our server, but along with that came the burden of furnishing all the image requests.\n\nEarlier this year Gravatar changed hands and is now run by the same team behind WordPress.com. Those guys clearly know what they\u2019re doing when it comes to high performance, so this year we went back to serving avatars directly from them.\n\nIf your site uses avatars, it really makes sense to use a service like Gravatar where your users probably already have an account, and where the image requests are going to be dealt with for you. \n\nKnow what you\u2019re paying for\n\nThe server account we use for 24 ways was opened in November 2005. When we first hit the front page of Digg in December of that year, we upgraded the server with a bit more memory, but other than that we were still running on that 2005 spec for two years. 
Of course, the world of technology has moved on in those years, prices have dropped and specs have improved. For the same amount we were paying for that 2005 spec server, we could have an account with twice the CPU, memory and disk space.\n\nSo in November of this year I took out a new account and transferred the site from the old server to the new. In that single step we were prepared for dealing with twice the amount of traffic, and because of a special offer at the time I didn\u2019t even have to pay the setup cost on the new server. So it really pays to know what you\u2019re paying for and keep an eye out of ways you can make improvements without needing to spend more money.\n\nFurther steps\n\nThere\u2019s nearly always more that can be done. For example, there are some media files (particularly for older articles) that are not on S3. We also serve our CSS directly and it\u2019s not minified or compressed. But by tackling the big problems first we\u2019ve managed to reduce load on the server and at the same time make sure that the load being placed on the server can be dealt with in the most frugal way.\n\nOver the last 24 days we\u2019ve served up articles to more than 350,000 visitors without breaking a sweat. On a busy day, that\u2019s been nearly 20,000 visitors in just 24 hours. While in the grand scheme of things that\u2019s not a huge amount of traffic, it can be a lot if you\u2019re not prepared for it. However, with a little planning for the peaks you can help ensure that when the traffic arrives you\u2019re ready to capitalise on it.\n\nOf course, people only visit 24 ways for the wealth of knowledge and experience that\u2019s tied up in the articles here. Therefore I\u2019d like to take the opportunity to thank all our authors this year who have given their time as a gift to the community, and to wish you all a very happy Christmas.", "year": "2007", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2007-12-24T00:00:00+00:00", "url": "https://24ways.org/2007/performance-on-a-shoe-string/", "topic": "ux"} {"rowid": 101, "title": "Easing The Path from Design to Development", "contents": "As a web developer, I have the pleasure of working with a lot of different designers. There has been a lot of industry discussion of late about designers and developers, focusing on how different we sometimes are and how the interface between our respective phases of a project (that is to say moving from a design phase into production) can sometimes become a battleground.\n\nI don\u2019t believe it has to be a battleground. It\u2019s actually more like being a dance partner \u2013 our steps are different, but as long as we know our own part and have a little knowledge of our partner\u2019s steps, it all goes together to form a cohesive dance. Albeit with less spandex and fewer sequins (although that may depend on the project in question).\n\nAs the process usually flows from design towards development, it\u2019s most important that designers have a little knowledge of how the site is going to be built. 
At the specialist web development agency I\u2019m part of, we find that designs that have been well considered from a technical perspective help to keep the project on track and on budget.\n\nBased on that experience, I\u2019ve put together my checklist of things that designers should consider before handing their work over to a developer to build.\n\nLayout\n\nOne rookie mistake made by traditionally trained designers transferring to the web is to forget a web browser is not a fixed medium. Unlike designing a magazine layout or a piece of packaging, there are lots of available options to consider. Should the layout be fluid and resize with the window, or should it be set to a fixed width? If it\u2019s fluid, which parts expand and which not? If it\u2019s fixed, should it sit in the middle of the window or to one side?\n\nIf any part of the layout is going to be flexible (get wider and narrower as required), consider how any graphics are affected. Images don\u2019t usually look good if displayed at anything other than their original size, so how should they behave? If a column is going to get wider than it\u2019s shown in the Photoshop comp, it may be necessary to provide separate wider versions of any background images.\n\nText size and content volume\n\nA related issue is considering how the layout behaves with both different sizes of text and different volumes of content. Whilst text zooming rather than text resizing is becoming more commonplace as the default behaviour in browsers, it\u2019s still a fundamentally important principle of web design that we are suggesting and not dictating how something should look. Designs must allow for a little give and take in the text size, and how this affects the design needs to be taken into consideration.\n\nKeep in mind that the same font can display differently in different places and on different platforms. Something as simple as Times will display wider on a Mac than on Windows. However, the main impact of text resizing is the change in how much vertical space copy takes up. This is particularly important where space is limited by the design (making text bigger causes many more problems than making text smaller). Each element from headings to box-outs to navigation items and buttons needs to be able to expand at least vertically, if not horizontally as well. This may require some thought about how elements on the page may wrap onto new lines, as well as again making sure to provide extended versions of any graphical elements.\n\nSimilarly, it\u2019s rare these days to know exactly what content you\u2019re working with when a site is designed. Many, if not most sites are designed as a series of templates for some kind of content management system, and so designs cannot be tweaked around any specific item of content. Designs must be able to cope with both much greater and much lesser volumes of content than might be thrown in at the lorem ipsum phase.\n\nParticular things to watch out for are things like headings (how do they wrap onto multiple lines) and any user-generated items like usernames. It can be very easy to forget that whilst you might expect something like a username to be 8-12 characters, if the systems powering your site allow for 255 characters there\u2019ll always be someone who\u2019ll go there. Expect them to do so.\n\nWhether your site is content managed or not, consider the possibility that the structure might be expanded in the future. Consider how additional items might be added to each level of navigation.
Whilst it\u2019s rarely desirable to make significant changes without revisiting the site\u2019s information architecture more thoroughly, it\u2019s an inevitable fact of life that the structure needs a little bit of flexibility to change over time.\n\nInteractions with and without JavaScript\n\nA great number of sites now make good use of JavaScript to streamline the user interface and make everything just that touch more usable. Remember, though, that any developer worth their salt will start by building the interface without JavaScript, get it all working, and then layer that JavaScript on top. This is to allow for users viewing the site without JavaScript available or enabled in their browser.\n\nDesigners need to consider both states of any feature they\u2019re designing \u2013 how it looks and functions with and without JavaScript. If the feature does something fancy with Ajax, consider how the same can be achieved with basic HTML forms, links and intermediary pages. These all need to be designed, because this is how some of your users will interact with the site.\n\nLogged in and logged out states\n\nWhen designing any type of web application or site that has a membership system \u2013 that is to say users can create an account and log into the site \u2013 the design will need to consider how any element is presented in both logged in and logged out states. For some items there\u2019ll be no difference, whereas for others there may be considerable differences.\n\nShould an item be hidden completely for logged out users? Should it look different in some way? Perhaps it should look the same, but prompt the user to log in when they interact with it. If so, what form should that prompt take and how does the user progress through the authentication process to arrive back at the task they were originally trying to complete?\n\nCouple logged in and logged out states with the possible absence of JavaScript, and every feature needs to be designed in four different states:\n\n\n\tLogged out with JavaScript available\n\tLogged in with JavaScript available\n\tLogged out without JavaScript available\n\tLogged in without JavaScript available\n\n\nFonts\n\nThere are three main causes of war in this world: religions, politics and fonts. I\u2019ve said publicly before that I believe the responsibility for this falls squarely at the feet of Adobe Photoshop. Photoshop, like a mistress at a brothel, parades a vast array of ropey, yet strangely enticing typefaces past the eyes of weak, lily-livered designers, who can\u2019t help but crumble to their curvy charms.\n\nYet, on the web, we have to be a little more restrained in our choice of typefaces. The purest solution is always to make the best use of the available fonts, but this isn\u2019t always the most desirable solution from a design point of view. There are several technical solutions such as techniques that utilise Flash (like sIFR), dynamically generated images and even canvas in newer browsers. Discuss the best approach with your developer, as every different technique has different trade-offs, and this may impact the design in other ways.\n\nMessaging\n\nAny site that has interactive elements, from a simple contact form through to a fully featured online software application, involves some kind of user messaging. By this I mean the error messages when something goes wrong and the success and thank-you messages when something goes right.
\n\nLogged in and logged out states\n\nWhen designing any type of web application or site that has a membership system \u2013 that is to say users can create an account and log into the site \u2013 the design will need to consider how any element is presented in both logged in and logged out states. For some items there\u2019ll be no difference, whereas for others there may be considerable differences.\n\nShould an item be hidden completely from logged out users? Should it look different in some way? Perhaps it should look the same, but prompt the user to log in when they interact with it. If so, what form should that prompt take, and how does the user progress through the authentication process to arrive back at the task they were originally trying to complete?\n\nCouple logged in and logged out states with the possible absence of JavaScript, and every feature needs to be designed in four different states:\n\n\n\tLogged out with JavaScript available\n\tLogged in with JavaScript available\n\tLogged out without JavaScript available\n\tLogged in without JavaScript available\n\n\nFonts\n\nThere are three main causes of war in this world: religion, politics and fonts. I\u2019ve said publicly before that I believe the responsibility for this falls squarely at the feet of Adobe Photoshop. Photoshop, like a mistress at a brothel, parades a vast array of ropey, yet strangely enticing typefaces past the eyes of weak, lily-livered designers, who can\u2019t help but crumble to their curvy charms.\n\nYet, on the web, we have to be a little more restrained in our choice of typefaces. The purest solution is always to make the best use of the available fonts, but this isn\u2019t always the most desirable solution from a design point of view. There are several technical solutions, such as techniques that utilise Flash (like sIFR), dynamically generated images, and even canvas in newer browsers. Discuss the best approach with your developer, as each technique has different trade-offs, and this may impact the design in other ways.\n\nMessaging\n\nAny site that has interactive elements, from a simple contact form through to a fully featured online software application, involves some kind of user messaging. By this I mean the error messages when something goes wrong and the success and thank-you messages when something goes right. These typically appear as the result of an interaction, so are easy to forget and miss off a Photoshop comp.\n\nFor every form, consider what gets displayed to the user if they make a mistake or miss something out, and also what gets displayed back when the interaction is successful. What do they see and where do they go next?\n\nWith Ajax interactions, the user doesn\u2019t get any visual feedback of the site waiting for a response from the server unless you design it that way. Consider using a \u2018waiting\u2019 or \u2018in progress\u2019 spinner to give the user some visual feedback of any background processes. How should these look? How do they animate?
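The mechanics are usually trivial \u2013 as a rough sketch (jQuery again, IDs and the script URL hypothetical), a spinner can be shown for exactly as long as a request is in flight:\n\nvar query = $('#q').val();\t// a hypothetical search field\n$('#spinner').show();\n$.get('search.php', { q: query }, function (html) {\n\t$('#results').html(html);\n\t$('#spinner').hide();\t// response has arrived\n});\n\nThe design question is what lives inside that spinner element, and how it looks while it waits.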
\n\nSimilarly, also consider the big error pages like a 404. With luck, these won\u2019t often be seen, but it\u2019s at the point that they are seen that careful design matters the most.\n\nForm fields\n\nDepending on the visual style of your site, the look of a browser\u2019s default form fields and buttons can sometimes jar. It\u2019s understandable that many a designer wants to change the way they look. Depending on the browser in question, various things can be done to style form fields and their buttons (although it\u2019s not as flexible as most would like).\n\nA common request is to replace the default buttons with a graphical button. This is usually achievable in most cases, although it\u2019s not easy to get a consistent result across all browsers \u2013 particularly when it comes to vertical positioning and the space surrounding the button. If the layout is very precise, or if space is at a premium, it\u2019s often safest to try and live with the browser\u2019s default form controls.\n\nWhichever way you go, it\u2019s important to remember that in general, each form field should have a label, and each form should have a submit button. If you find that your form breaks either of those rules, you should double-check it.\n\nPractical tips for handing files over\n\nThere are a couple of basic steps that a designer can carry out to make sure that the developer has the best chance of implementing the design exactly as envisioned.\n\nIf working with Photoshop, Fireworks or a similar comping tool, it helps to group and label layers to make it easy for a developer to see which layers need to be turned on and off to isolate parts of the page and different states of the design. Also, if you don\u2019t work in the same office as your developer (and so they can\u2019t quickly check with you), provide a PDF of each page and state so that your developer can see how each page should look, avoiding any confusion over which layers should be switched on or off. These also act as a handy quick reference that can be used without firing up Photoshop (which can kill both productivity and your machine).\n\nFinally, provide a colour reference showing the RGB values of all the key colours used throughout the design. Without this, the developer will end up colour-picking from the comps, and could potentially end up with different colours to those you intended. Remember, for a lot of developers, working in a tool like Photoshop is like presenting a designer with an SSH terminal into a web server. It\u2019s unfamiliar ground and easy to get things wrong. Be the expert in your own domain and help your colleagues out when they\u2019re out of their comfort zone. That goes both ways.\n\nIn conclusion\n\nWhen asked the question of how to smooth the hand-over between design and development, almost everyone who has experienced this situation could come up with their own list. This one is mine, based on some of the more common experiences we have at edgeofmyseat.com. So in bullet point form, here\u2019s my checklist for handing a design over.\n\n\n\tIs the layout fixed, or fluid?\n\tDoes each element cope with expanding for larger text and more content?\n\tAre all the graphics large enough to cope with an area expanding?\n\tDoes each interactive element have states for both with and without JavaScript?\n\tDoes each element have a state for logged in and logged out users?\n\tHow are any custom fonts being displayed? (And does the developer have the font to use?)\n\tDoes each interactive element have error and success messages designed?\n\tDo all form fields have a label and each form a submit button?\n\tIs your Photoshop comp document well organised?\n\tHave you provided flat PDFs of each state?\n\tHave you provided a colour reference?\n\tAre we having fun yet?", "year": "2008", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2008-12-01T00:00:00+00:00", "url": "https://24ways.org/2008/easing-the-path-from-design-to-development/", "topic": "process"} {"rowid": 181, "title": "Working With RGBA Colour", "contents": "When Tim and I were discussing the redesign of this site last year, one of the clear goals was to have a graphical style without making the pages heavy with a lot of images. When we launched, a lot of people were surprised that the design wasn\u2019t built with PNGs. Instead we\u2019d used RGBA colour values, which is part of the CSS3 specification.\n\nWhat is RGBA Colour?\n\nWe\u2019re all familiar with specifying colours in CSS by defining the mix of red, green and blue light required to achieve our tone. This is fine and dandy, but whatever values we specify have one thing in common \u2014 the colours are all solid, flat, and well, a bit boring.\n\n Flat RGB colours\n\nCSS3 introduces a couple of new ways to specify colours, and one of those is RGBA. The A stands for Alpha, which refers to the level of opacity of the colour, or to put it another way, the amount of transparency. This means that we can set not only the red, green and blue values, but also control how much of what\u2019s behind the colour shows through. Like with layers in Photoshop.\n\nDon\u2019t We Have Opacity Already?\n\nThe ability to set the opacity on a colour differs subtly from setting the opacity on an element using the CSS opacity property. Let\u2019s look at an example.\n\nHere we have an H1 with foreground and background colours set against a page with a patterned background.\n\n Heading with no transparency applied\n\nh1 {\n\tcolor: rgb(0, 0, 0);\n\tbackground-color: rgb(255, 255, 255);\n}\n\nBy setting the CSS opacity property, we can adjust the transparency of the entire element and its contents:\n\n Heading with 50% opacity on the element\n\nh1 {\n\tcolor: rgb(0, 0, 0);\n\tbackground-color: rgb(255, 255, 255);\n\topacity: 0.5;\n}\n\nRGBA colour gives us something different \u2013 the ability to control the opacity of the individual colours rather than the entire element. 
So we can set the opacity on just the background:\n\n 50% opacity on just the background colour\n\nh1 {\n\tcolor: rgb(0, 0, 0);\n\tbackground-color: rgba(255, 255, 255, 0.5);\n}\n\nOr leave the background solid and change the opacity on just the text:\n\n 50% opacity on just the foreground colour\n\nh1 {\n\tcolor: rgba(0, 0, 0, 0.5);\n\tbackground-color: rgb(255, 255, 255);\n}\n\nThe How-To\n\nYou\u2019ll notice that above I\u2019ve been using the rgb() syntax for specifying colours. This is a bit less common than the usual hex codes (like #FFF) but it makes sense when starting to use RGBA. As there\u2019s no way to specify opacity with hex codes, we use rgba() like so:\n\ncolor: rgba(255, 255, 255, 0.5);\n\nJust like rgb(), the first three values are red, green and blue. You can specify these as 0-255 or as 0%-100%. The fourth value is the opacity level from 0 (completely transparent) to 1 (completely opaque).\n\nYou can use this anywhere you\u2019d normally set a colour in CSS \u2014 so it\u2019s good for foregrounds and backgrounds, borders, outlines and so on. All the transparency effects on this site\u2019s current design are achieved this way.\n\nSupporting All Browsers\n\nLike a lot of the features we\u2019ll be looking at in this year\u2019s 24 ways, RGBA colour is supported by a lot of the newest browsers, but not the rest. The Firefox, Safari, Chrome and Opera browsers all support RGBA, but Internet Explorer does not.\n\nFortunately, due to the robust design of CSS as a language, we can specify RGBA colours for browsers that support it and an alternative for browsers that do not.\n\nFalling back to solid colour\n\nThe simplest technique is to allow the browser to fall back to using a solid colour when opacity isn\u2019t available. The CSS parsing rules specify that any unrecognised value should be ignored. We can make use of this because a browser without RGBA support will treat a colour value specified with rgba() as unrecognised and discard it.\n\nSo if we specify the colour first using rgb() for all browsers, we can then overwrite it with an rgba() colour for browsers that understand RGBA.\n\nh1 {\n\tcolor: rgb(127, 127, 127);\n\tcolor: rgba(0, 0, 0, 0.5);\n}\n\nFalling back to a PNG\n\nIn cases where you\u2019re using transparency on a background-color (although not on borders or text) it\u2019s possible to fall back to using a PNG with alpha channel to get the same effect. This is less flexible than using CSS as you\u2019ll need to create a new PNG for each level of transparency required, but it can be a useful solution.\n\nUsing the same principle as before, we can specify the background in a style that all browsers will understand, and then overwrite it in a way that browsers without RGBA support will ignore.\n\nh1 {\n\tbackground: transparent url(black50.png);\n\tbackground: rgba(0, 0, 0, 0.5) none;\n}\n\nIt\u2019s important to note that this works because we\u2019re using the background shorthand property, enabling us to set both the background colour and background image in a single declaration. It\u2019s this that enables us to rely on the browser ignoring the second declaration when it encounters the unknown rgba() value.\n\nNext Steps\n\nThe really great thing about RGBA colour is that it gives us the ability to create far more graphically rich designs without the need to use images. Not only does that make for faster and lighter pages, but sites which are easier and quicker to build and maintain. 
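This opens the door to scripting, too. As a quick sketch (plain DOM scripting, nothing assumed beyond an h1 on the page), an RGBA colour can be rewritten on the fly just like any other style value:\n\nvar heading = document.getElementsByTagName('h1')[0];\n// Raise the white background to 80% opacity\nheading.style.backgroundColor = 'rgba(255, 255, 255, 0.8)';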
CSS values like these can be changed in response to user interaction or, as above, manipulated with JavaScript in a way that\u2019s just not so easy using images.\n\n Opacity can be changed on :hover or manipulated with JavaScript\n\ndiv {\n\tcolor: rgba(255, 255, 255, 0.8);\n\tbackground-color: rgba(142, 213, 87, 0.3);\n}\ndiv:hover {\n\tcolor: rgba(255, 255, 255, 1);\n\tbackground-color: rgba(142, 213, 87, 0.6);\n}\n\nClever use of transparency in border colours can help ease the transition between overlay items and the page behind.\n\n Borders can receive the RGBA treatment, too\n\ndiv {\n\tcolor: rgb(0, 0, 0);\n\tbackground-color: rgb(255, 255, 255);\n\tborder: 10px solid rgba(255, 255, 255, 0.3);\n}\n\nIn Conclusion\n\nThat\u2019s a brief insight into RGBA colour, what it\u2019s good for and how it can be used whilst providing support for older browsers. With the current lack of support in Internet Explorer, it\u2019s probably not a technique that commercial designs will want to rely on heavily right away \u2013 simply because of the overhead of needing to think about fallback all the time.\n\nIt is, however, a useful tool to have for those smaller, less critical touches that can really help to finesse a design. As browser support becomes more mainstream, you\u2019ll already be familiar and practised with RGBA and ready to go.", "year": "2009", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2009-12-01T00:00:00+00:00", "url": "https://24ways.org/2009/working-with-rgba-colour/", "topic": "code"} {"rowid": 220, "title": "Finding Your Way with Static Maps", "contents": "Since the introduction of the Google Maps service in 2005, online maps have taken off in a way not really possible before the invention of slippy map interaction. Although quickly followed by a plethora of similar services from both commercial and non-commercial parties, Google\u2019s first-mover advantage and easy-to-use developer API saw Google Maps become pretty much the de facto mapping service.\n\nIt\u2019s now so easy to add a map to a web page, there\u2019s no reason not to. Dropping an iframe map into your page is as simple as embedding a YouTube video.\n\nBut there\u2019s one crucial drawback to both the solution Google provides for you to drop into your page and the code developers typically implement themselves \u2013 they don\u2019t work without JavaScript.\n\nA bit about JavaScript\n\nBack in October of this year, The Yahoo! Developer Network blog ran some tests to measure how many visitors to the Yahoo! home page didn\u2019t have JavaScript available or enabled in their browser. It\u2019s an interesting test when you consider that the audience for the Yahoo! home page (one of the most visited pages on the web) represents about as mainstream a sample as you\u2019ll find. If there\u2019s any such thing as an \u2018average Web user\u2019 then this is them.\n\nThe results surprised me. It varied from region to region, but at most just two per cent of visitors didn\u2019t have JavaScript running. To be honest, I was expecting it to be higher, but this quote from the article caught my attention:\n\n\n\tWhile the percentage of visitors with JavaScript disabled seems like a low number, keep in mind that small percentages of big numbers are also big numbers.\n\n\nThat\u2019s right, of course, and it got me thinking about what that two per cent means. For many sites, two per cent is the number of visitors using the Opera web browser, using IE6, or using Mobile Safari. 
\n\nSo, although a small percentage of the total, users without JavaScript can\u2019t just be forgotten about, and catering for them is at the very heart of how the web is supposed to work.\n\nStarting with content in HTML, we layer on presentation with CSS and then enhance interactivity with JavaScript. If anything fails along the way, or the network craps out, or a browser just doesn\u2019t support one of the technologies, the user still gets something they can work with.\n\nIt\u2019s progressive enhancement \u2013 also known as doing our jobs properly.\n\nSorry, wasn\u2019t this about maps?\n\nAs I was saying, the default code Google provides, and the example code it gives to developers (which typically just gets followed \u2018as is\u2019), doesn\u2019t account for users without JavaScript. No JavaScript, no content.\n\nWhen adding the ability to publish maps to our small content management system Perch, I didn\u2019t want to provide a solution that only worked with JavaScript. I had to go looking for a way to provide maps without JavaScript, too.\n\nThere\u2019s a simple solution, fortunately, in the form of static map tiles. All the various slippy map services use a JavaScript interface on top of what are basically rendered map image tiles. Dragging the map loads in more image tiles in the direction you want to view. If you\u2019ve used a slippy map on a slow connection, you\u2019ll be familiar with seeing these tiles load in one by one.\n\nThe Static Map API\n\nThe good news is that these tiles (or tiles just like them) can be used as regular images on your site. Google has a Static Map API which not only gives you a handy interface to retrieve a tile for the exact area you need, but also allows you to place pins, and zoom and centre the tile so that the image looks just so.\n\nThis means that you can create a static, non-JavaScript version of your slippy map\u2019s initial (or ideal) state to load into your page as a regular image, and then have the JavaScript map hijack the image and make it slippy.\n\nClearly, that\u2019s not going to be a perfect solution for every map\u2019s requirements. It doesn\u2019t allow for panning, zooming or interrogation without JavaScript. However, for the majority of straightforward map uses online, a static map makes a great alternative for those visitors without JavaScript.\n\nHere\u2019s the how\n\nRetrieving a static map tile is staggeringly easy \u2013 it\u2019s just a case of forming a URL with the correct arguments and then using that as the src of an image tag. For example (the location and dimensions here are illustrative):\n\n<img src=\"http://maps.google.com/maps/api/staticmap?center=London,UK&zoom=12&size=500x300&maptype=roadmap&markers=London,UK&sensor=false\" width=\"500\" height=\"300\" alt=\"Map of London\" />\n\nAs you can see, there are a few key options that we pass along to the base URL. All of these should be familiar to anyone who\u2019s worked with the JavaScript API.\n\n\n\tcenter determines the point on which the map is centred. This can be latitude and longitude values, or simply an address which is then geocoded.\n\tzoom sets the zoom level.\n\tsize is the pixel dimensions of the image you require.\n\tmaptype can be roadmap, satellite, terrain or hybrid.\n\tmarkers sets one or more pin locations. Markers can be labelled, have different colours, and so on \u2013 there\u2019s quite a lot of control available.\n\tsensor states whether you are using a sensor to determine the user\u2019s location. 
When just embedding a map in a web page, set this to false.\n\nThere are many more options, including plotting paths and setting the image format, which can all be found in the straightforward documentation.\n\nAdding to your page\n\nIf you\u2019ve worked with the JavaScript API, you\u2019ll know that it needs a container element which you inject the map into:\n\n<div id=\"map\"></div>
\n\nAll you need to do is put your static image inside that container:\n\n
<div id=\"map\">\n <img src=\"http://maps.google.com/maps/api/staticmap?center=London,UK&zoom=12&size=500x300&maptype=roadmap&markers=London,UK&sensor=false\" width=\"500\" height=\"300\" alt=\"Map of London\" />\n</div>
\n\nAnd then, in your JavaScript, find the image and remove it. For example, with jQuery you\u2019d simply use:\n\n$('#map img').remove();
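From there the real map can take over the same container. Here\u2019s a rough sketch of the complete hijack \u2013 assuming the Google Maps JavaScript API (v3 in this sketch) has already been loaded, with the initialisation options and coordinates entirely illustrative:\n\n$(function () {\n\tvar container = $('#map');\n\tcontainer.find('img').remove();\t// clear the static fallback\n\t// Boot the slippy map into the now-empty container\n\tnew google.maps.Map(container[0], {\n\t\tzoom: 12,\n\t\tcenter: new google.maps.LatLng(51.51, -0.12),\n\t\tmapTypeId: google.maps.MapTypeId.ROADMAP\n\t});\n});\n\nWhy not use a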