{"rowid": 166, "title": "Performance On A Shoe String", "contents": "Back in the summer, I happened to notice the official Wimbledon All England Tennis Club site had jumped to the top of Alexa\u2019s Movers & Shakers list \u2014 a list that tracks sites that have had the biggest upturn or downturn in traffic. The lawn tennis championships were underway, and so traffic had leapt from almost nothing to crazy-busy in a no time at all. \n\nMany sites have similar peaks in traffic, especially when they\u2019re based around scheduled events. No one cares about the site for most of the year, and then all of a sudden \u2013 wham! \u2013 things start getting warm in the data centre. Whilst the thought of chestnuts roasting on an open server has a certain appeal, it\u2019s less attractive if you care about your site being available to visitors. Take a look at this Alexa traffic graph showing traffic patterns for superbowl.com at the beginning of each year, and wimbledon.org in the month of July.\n\nTraffic graph from Alexa.com \n\nWhilst not on the same scale or with such dramatic peaks, we have a similar pattern of traffic here at 24ways.org. Over the last three years we\u2019ve seen a dramatic pick up in traffic over the month of December (as would be expected) and then a much lower, although steady load throughout the year. What we do have, however, is the luxury of knowing when the peaks will be. For a normal site, be that a blog, small scale web app, or even a small corporate site, you often just cannot predict when you might get slashdotted, end up on the front page of Digg or linked to from a similarly high-profile site. You just don\u2019t know when the peaks will be.\n\nIf you\u2019re a big commercial enterprise like the Super Bowl, scaling up for that traffic is simply a cost of doing business. But for most of us, we can\u2019t afford to have massive capacity sat there unused for 90% of the year. What you have to do instead is work out how to deal with as much traffic as possible with the modest resources you have.\n\nIn this article I\u2019m going to talk about some of the things we\u2019ve learned about keeping 24 ways running throughout December, whilst not spending a fortune on hosting we don\u2019t need for 11 months of each year. We\u2019ve not always got it right, but we\u2019ve learned a lot along the way.\n\nThe Problem\n\nTo know how to deal with high traffic, you need to have a basic idea of what happens when a request comes into a web server. 24 ways is hosted on a single small virtual dedicated server with a great little hosting company in the UK. We run Apache with PHP and MySQL all on that one server. When a request comes in a new Apache process is started to deal with the request (or assigned if there\u2019s one available not doing anything). Each process takes a bunch of memory, so there\u2019s a finite number of processes that you can run, and therefore a finite number of pages you can serve at once before your server runs out of memory.\n\nWith our budget based on whatever is left over after beer, we need to get best performance we can out of the resources available. As the goal is to serve as many pages as quickly as possible, there are several approaches we can take:\n\n\n\tReducing the amount of memory needed by each Apache process\n\tReducing the amount of time each process is needed\n\tReducing the number of requests made to the server\n\n\nYahoo! 
have published some information on what they call Exceptional Performance, which is well worth reading, and compliments many of my examples here. The Yahoo! guidelines very much look at things from a user perspective, which is always important.\n\nServer tweaking\n\nIf you\u2019re in the position of being able to change your server configuration (our set-up gives us root access to what is effectively a virtual machine) there are some basic steps you can take to maximise the available memory and reduce the memory footprint. Without getting too boring and technical (whole books have been written on this) there are a couple of things to watch out for.\n\nFirstly, check what processes you have running that you might not need. Every megabyte of memory that you free up might equate to several thousand extra requests being served each day, so take a look at top and see what\u2019s using up your resources. Quite often a machine configured as a web server will have some kind of mail server running by default. If your site doesn\u2019t use mail (ours doesn\u2019t) make sure it\u2019s shut down and not using resources.\n\nSecondly, have a look at your Apache configuration and particularly what modules are loaded. The method for doing this varies between versions of Apache, but again, every module loaded increases the amount of memory that each Apache process requires and therefore limits the number of simultaneous requests you can deal with.\n\nThe final thing to check is that Apache isn\u2019t configured to start more servers than you have memory for. This is usually done by setting the MaxClients directive. When that limit is reached, your site is going to stop responding to further requests. However, if all else goes well that threshold won\u2019t be reached, and if it does it will at least stop the weight of the traffic taking the entire server down to a point where you can\u2019t even log in to sort it out.\n\nThose are the main tidbits I\u2019ve found useful for this site, although it\u2019s worth repeating that entire books have been written on this subject alone.\n\nCaching\n\nAlthough the site is generated with PHP and MySQL, the majority of pages served don\u2019t come from the database. The process of compiling a page on-the-fly involves quite a few trips to the database for content, templates, configuration settings and so on, and so can be slow and require a lot of CPU. Unless a new article or comment is published, the site doesn\u2019t actually change between requests and so it makes sense to generate each page once, save it to a file and then just serve all following requests from that file.\n\nWe use QuickCache (or rather a plugin based on it) for this. The plugin integrates with our publishing system (Textpattern) to make sure the cache is cleared when something on the site changes. A similar plugin called WP-Cache is available for WordPress, but of course this could be done any number of ways, and with any back-end technology.\n\nThe important principal here is to reduce the time it takes to serve a page by compiling the page once and serving that cached result to subsequent visitors. Keep away from your database if you can.\n\nOutsource your feeds\n\nWe get around 36,000 requests for our feed each day. That really only works out at about 7,000 subscribers averaging five-and-a-bit requests a day, but it\u2019s still 36,000 requests we could easily do without. Each request uses resources and particularly during December, all those requests can add up. 
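\n\nIf you want to see how big a slice of your own traffic is feed polling, your access logs will tell you. Assuming a fairly standard Apache set-up logging to access.log and a feed served at /rss/ (swap in whatever path and log location your server actually uses), a rough count of those feed hits is a one-liner:\n\ngrep -c \"GET /rss/\" /var/log/apache2/access.log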
\n\nThe simple solution here was to switch our feed over to using FeedBurner. We publish the address of the FeedBurner version of our feed here, so those 36,000 requests a day hit FeedBurner\u2019s servers rather than ours. In addition, we get pretty graphs showing how the subscriber-base is building.\n\n\n\nOff-load big files\n\nLarger files like images or downloads pose a problem not in bandwidth, but in the time it takes them to transfer. A typical page request is very quick, a few seconds at the most, resulting in the connection being freed up promptly. Anything that keeps a connection open for a long time is going to start killing performance very quickly.\n\nThis year, we started serving most of the images for articles from a subdomain \u2013 media.24ways.org. Rather than pointing to our own server, this subdomain points to an Amazon S3 account where the files are held. It\u2019s easy to pigeon-hole S3 as merely an online backup solution, and whilst not a fully fledged CDN, S3 works very nicely for serving larger media files. The roughly 20GB of files served this month have cost around $5 in Amazon S3 charges. That\u2019s so affordable it may not be worth even taking the files back off S3 once December has passed.\n\nI found this article on Scalable Media Hosting with Amazon S3 to be really useful in getting started. I upload the files via a Firefox plugin (mentioned in the article) and then edit the ACL to allow public access to the files. The way S3 enables you to point DNS directly at it means that you\u2019re not tied to always using the service, and that it can be transparent to your users.\n\nIf your site uses photographs, consider uploading them to a service like Flickr and serving them directly from there. Many photo sharing sites are happy for you to link to images in this way, but do check the acceptable use policies in case you need to provide a credit or link back.\n\nOff-load small files\n\nYou\u2019ll have noticed the pattern by now \u2013 get rid of as much traffic as possible. When an article has a lot of comments and each of those comments has an avatar along with it, a great many requests are needed to fetch each of those images. In 2006 we started using Gravatar for avatars, but their servers were slow and were holding up page loads. To get around this we started caching the images on our server, but along with that came the burden of furnishing all the image requests.\n\nEarlier this year Gravatar changed hands and is now run by the same team behind WordPress.com. Those guys clearly know what they\u2019re doing when it comes to high performance, so this year we went back to serving avatars directly from them.\n\nIf your site uses avatars, it really makes sense to use a service like Gravatar where your users probably already have an account, and where the image requests are going to be dealt with for you. \n\nKnow what you\u2019re paying for\n\nThe server account we use for 24 ways was opened in November 2005. When we first hit the front page of Digg in December of that year, we upgraded the server with a bit more memory, but other than that we were still running on that 2005 spec for two years. Of course, the world of technology has moved on in those years, prices have dropped and specs have improved. For the same amount we were paying for that 2005 spec server, we could have an account with twice the CPU, memory and disk space.\n\nSo in November of this year I took out a new account and transferred the site from the old server to the new. 
In that single step we were prepared for dealing with twice the amount of traffic, and because of a special offer at the time I didn\u2019t even have to pay the setup cost on the new server. So it really pays to know what you\u2019re paying for and keep an eye out of ways you can make improvements without needing to spend more money.\n\nFurther steps\n\nThere\u2019s nearly always more that can be done. For example, there are some media files (particularly for older articles) that are not on S3. We also serve our CSS directly and it\u2019s not minified or compressed. But by tackling the big problems first we\u2019ve managed to reduce load on the server and at the same time make sure that the load being placed on the server can be dealt with in the most frugal way.\n\nOver the last 24 days we\u2019ve served up articles to more than 350,000 visitors without breaking a sweat. On a busy day, that\u2019s been nearly 20,000 visitors in just 24 hours. While in the grand scheme of things that\u2019s not a huge amount of traffic, it can be a lot if you\u2019re not prepared for it. However, with a little planning for the peaks you can help ensure that when the traffic arrives you\u2019re ready to capitalise on it.\n\nOf course, people only visit 24 ways for the wealth of knowledge and experience that\u2019s tied up in the articles here. Therefore I\u2019d like to take the opportunity to thank all our authors this year who have given their time as a gift to the community, and to wish you all a very happy Christmas.", "year": "2007", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2007-12-24T00:00:00+00:00", "url": "https://24ways.org/2007/performance-on-a-shoe-string/", "topic": "ux"} {"rowid": 150, "title": "A Gift Idea For Your Users: Respect, Yo", "contents": "If, indeed, it is the thought that counts, maybe we should pledge to make more thoughtful design decisions. In addition to wowing people who use the Web sites we build with novel features, nuanced aesthetics and the new new thing, maybe we should also thread some subtle things throughout our work that let folks know: hey, I\u2019m feeling ya. We\u2019re simpatico. I hear you loud and clear.\n\nIt\u2019s not just holiday spirit that moves me to talk this way. As good as people are, we need more than the horizon of karma to overcome that invisible demon, inertia. Makers of the Web, respectful design practices aren\u2019t just the right thing, they are good for business. Even if your site is the one and only place to get experience x, y or zed, you don\u2019t rub someone\u2019s face in it. You keep it free flowing, you honor the possible back and forth of a healthy transaction, you are Johnny Appleseed with the humane design cues. You make it clear that you are in it for the long haul.\n\nA peek back:\n\n\n\nThink back to what search (and strategy) was like before Google launched a super clean page with \u201cI\u2019m Feeling Lucky\u201d button. Aggregation was the order of the day (just go back and review all the \u2018strategic alliances\u2019 that were announced daily.) Yet the GOOG comes along with this zen layout (nope, we\u2019re not going to try to make you look at one of our media properties) and a bold, brash, teleport-me-straight-to-the-first-search-result button. It could have been titled \u201cWe\u2019re Feeling Cocky\u201d. These were radical design decisions that reset how people thought about search services. 
Oh, you mean I can just find what I want and get on with it?\n\nIt\u2019s maybe even more impressive today, after the GOOG has figured out how to monetize attention better than anyone. \u201cI\u2019m Feeling Lucky\u201d is still there. No doubt, it costs the company millions. But by leaving a little money on the table, they keep the basic bargain they started to strike in 1997. We\u2019re going to get you where you want to go as quickly as possible.\n\nWhere are the places we might make the same kind of impact in our work? Here are a few ideas:\n\nRespect People\u2019s Time\n\nAs more services become more integrated with our lives, this will only become more important. How can you make it clear that you respect the time a user has granted you?\n\nUser-Oriented Defaults\n\nDefault design may be the psionic tool in your belt. Unseen, yet pow-er-ful. Look at your defaults. Who are they set up to benefit? Are you depending on users not checking off boxes so you can feel ok about sending them email they really don\u2019t want? Are you depending on users not checking off boxes so you tilt privacy values in ways most beneficial for your SERPs? Are you making it a little too easy for 3rd party applications to run buckwild through your system?\n\nThere\u2019s being right and then there\u2019s being awesome. Design to the spirit of the agreement and not the letter.\n\nSee this?\n\n\n\nMake sure that\u2019s really the experience you think people want. Whenever I see a service that defaults to not opting me in their newsletter just because I bought a t shirt from them, you can be sure that I trust them that much more. And they are likely to see me again.\n\nReduce, Reuse\n\nIt\u2019s likely that people using your service will have data and profile credentials elsewhere. You should really think hard about how you can let them repurpose some of that work within your system. Can you let them reduce the number of logins/passwords they have to manage by supporting OpenID? Can you let them reuse profile information from another service by slurping in (or even subscribing) to hCards? Can you give them a leg up by reusing a friends list they make available to you? (Note: please avoid the anti-pattern of inviting your user to upload all her credential data from 3rd party sites into your system.)\n\nThis is a much larger issue, and if you\u2019d like to get involved, have a look at the wiki, and dive in.\n\nMake it simple to leave\n\nOh, this drives me bonkers. Again, the more simple you make it to increase or decrease involvement in your site, or to just opt-out altogether, the better. This example from Basecamp is instructive:\n\n\n\nAt a glance, I can see what the implications are of choosing a different type of account. I can also move between account levels with one click. Finally, I can cancel the service easily. No hoop jumping. Also, it should be simple for users to take data with them or delete it.\n\nLet Them Have Fun\n\nDon\u2019t overlook opportunities for pleasure. Even the most mundane tasks can be made more enjoyable. Check out one of my favorite pieces of interaction design from this past year:\n\n\n\nHoly knob fiddling, Batman! What a great way to get people to play with preference settings: an equalizer metaphor. Those of a certain age will recall how fun it was to make patterns with your uncle\u2019s stereo EQ. I think this is a delightful way to encourage people to optimize their own experience of the news feed feature. 
Given the killer nature of this feature, it was important for Facebook to make it easy to fine tune.\n\nI\u2019d also point you to Flickr\u2019s Talk Like A Pirate Day Easter egg as another example of design that delights. What a huge amount of work for a one-off, totally optional way to experience the site. And what fun. And how true to its brand persona. Brill.\n\nAnti-hype\n\nDon\u2019t talk so much. Rather, ship and sample. Release code, tell the right users. See what happens. Make changes. Extend the circle a bit by showing some other folks. Repeat.\n\nThe more you hype coming features, the more you talk about what isn\u2019t yet, the more you build unrealistic expectations. Your genius can\u2019t possibly match our collective dreaming. Disappointment is inevitable. Yet, if you craft the right melody and make it simple for people to hum your tune, it will spread. Give it time. Listen.\n\nSpeak the Language of the Tribe\n\nIt\u2019s respectful to speak in a human way. Not that you have to get all zOMGWTFBBQ!!1 in your messaging. People respond when you speak to them in a way that sounds natural. Natural will, of course, vary according to context. Again, listen in and people will signal the speech that works in that group for those tasks. Reveal those cues in your interface work and you\u2019ll have powerful proof that actual people are working on your Web site.\n\nThis example of Pownce\u2018s gender selector is the kind of thing I\u2019m talking about:\n\n\n\nNow, this doesn\u2019t mean you should mimic the lingo on some cool kidz site. Your service doesn\u2019t need to have a massage when it\u2019s down. Think about what works for you and your tribe. Excellent advice here from Feedburner\u2019s Dick Costolo on finding a voice for your service. Also, Mule Design\u2019s Erika Hall has an excellent talk on improving your word fu.\n\nPass the mic, yo\n\nHere is a crazy idea: you could ask your users what they want. Maybe you could even use this input to figure out what they really want. Tools abound. Comments, wikis, forums, surveys. Embed the sexy new Get Satisfaction widget and have a dynamic FAQ running.\n\nThe point is that you make it clear to people that they have a means of shaping the service with you. And you must showcase in some way that you are listening, evaluating and taking action against some of that user input.\n\nYou get my drift. There are any number of ways we can show respect to those who gift us with their time, data, feedback, attention, evangelism, money. Big things are in the offing. I can feel the love already.", "year": "2007", "author": "Brian Oberkirch", "author_slug": "brianoberkirch", "published": "2007-12-23T00:00:00+00:00", "url": "https://24ways.org/2007/a-gift-idea-for-your-users-respect-yo/", "topic": "ux"} {"rowid": 159, "title": "How Media Studies Can Massage Your Message", "contents": "A young web designer once told his teacher \u2018just get to the meat already.\u2019 He was frustrated with what he called the \u2018history lessons and name-dropping\u2019 aspects of his formal college course. He just wanted to learn how to build Web sites, not examine the reasons why.\n\nTechnique and theory are both integrated and necessary portions of a strong education. The student\u2019s perspective has direct value, but also holds a distinct sorrow: Knowing the how without the why creates a serious problem. Without these surrounding insights we cannot tap into the influence of the history and evolved knowledge that came before. 
We cannot properly analyze, criticize, evaluate and innovate beyond the scope of technique.\n\nHistory holds the key to transcending former mistakes. Philosophy encourages us to look at different points of view. Studying media and social history empowers us as Web workers by bringing together myriad aspects of humanity in direct relation to the environment of society and technology. Having an understanding of media, communities, communication arts as well as logic, language and computer savvy are all core skills of the best of web designers in today\u2019s medium.\n\nControlling the Message\n\n\n\t\u2018The computer can\u2019t tell you the emotional story. It can give you the exact mathematical design, but what\u2019s missing is the eyebrows.\u2019 \u2013 Frank Zappa\n\n\nMedia is meant to express an idea. The great media theorist Marshall McLuhan suggests that not only is media interesting because it\u2019s about the expression of ideas, but that the media itself actually shapes the way a given idea is perceived. This is what McLuhan meant when he uttered those famous words: \u2018The medium is the message.\u2019\n\nIf instead of actually serving a steak to a vegetarian friend, what might a painting of the steak mean instead? Or a sculpture of a cow? Depending upon the form of media in question, the message is altered.\n\nFigure 1\n\nMust we know the history of cows to appreciate the steak on our plate? Perhaps not, but if we begin to examine how that meat came to be on the plate, the social, cultural and ideological associations of that cow, we begin to understand the complexity of both medium and message. A piece of steak on my plate makes me happy. A vegetarian friend from India might disagree and even find that that serving her a steak was very insensitive.\n\nTakeaway: Getting the message right involves understanding that message in order to direct it to your audience accordingly.\n\nA Separate Piece\n\nIf we revisit the student who only wants technique, while he might become extremely adept at the rendering of an idea, without an understanding of the medium, how is he to have greater control over how that idea is perceived? Ultimately, his creativity is limited and his perspective narrowed, and the teacher has done her student a disservice without challenging him, particularly in a scholastic environment, to think in liberal, creative and ultimately innovative terms.\n\nFor many years, web pundits including myself have promoted the idea of separation as a core concept within creating effective front-end media for the Web. By this, we\u2019ve meant literal separation of the technologies and documents: Markup for content; CSS for presentation; DOM Scripting for behavior. While the message of separation was an important part of understanding and teaching best practices for manageable, scalable sites, that separation is really just a separation of pieces, not of entire disciplines.\n\nFor in fact, the medium of the Web is an integrated one. That means each part of the desired message must be supported by the media silos within a given site. The visual designer must study the color, space, shape and placement of visual objects (including type) into a meaningful expression. The underlying markup is ideally written as semantically as possible, promote the meaning of the content it describes. Any interaction and functionality must make the process of the medium support, not take away from, the meaning of the site or Web application. 
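\n\nTo put that in concrete, hypothetical terms for our own medium, compare two ways of marking up the very same sentence. The first bakes the presentation into the message; the second lets the markup carry the meaning and leaves the presentation to CSS:\n\n<!-- presentation baked into the message -->\n<font color=\"red\" size=\"4\"><b>Order by December 21st for Christmas delivery.</b></font>\n\n<!-- markup that carries the meaning; styling lives in a style sheet -->\n<p class=\"delivery-notice\"><strong>Order by December 21st</strong> for Christmas delivery.</p>\n\nThe words are identical, but the second version tells both humans and machines what the content means rather than merely how it should look.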
\n\nExamination: The Glass Bead Game\n\nFigure 2\n\nFigure 2 shows two screenshots of CoreWave\u2019s historic \u2018Glass Bead Game.\u2019 Fashioned after Herman Hesse\u2019s novel of the same name, the game was an exploration of how ideas are connected to other ideas via multiple forms of media. It was created for the Web in 1996 using server-side randomization with .htmlx files in order to allow players to see how random associations are in fact not random at all.\n\nTakeaway: We can use the medium itself to explore creative ideas, to link us from one idea to the next, and to help us better express those ideas to our audiences.\n\nComputers and Human Interaction\n\nSince our medium involves computers and human interaction, it does us well to look to foundations of computers and reason. Not long ago I was chatting with Jared Spool on IM about this and that, and he asked me \u2018So how do you feel about that?\u2019 This caused me no end of laughter and I instantly quipped back \u2018It\u2019s okay by me ELIZA.\u2019 We both enjoyed the joke, but then I tried to share it with another dare I say younger colleague, and the reference was lost.\n\nRaise your hand if you got the reference! Some of you will, but many people who come to the Web medium do not get the benefit of such historical references because we are not formally educated in them. Joseph Weizenbaum created the ELIZA program, which emulates a Rogerian Therapist, in 1966. It was an early study of computers and natural human language. I was a little over 2 years old, how about you?\n\nConversation with Eliza\n\nThere are fortunately a number of ELIZA emulators on the Web. I found one at http://www.chayden.net/eliza/Eliza.html that actually contains the source code (in Java) that makes up the ELIZA script.\n\nFigure 3 shows a screen shot of the interaction. ELIZA first welcomes me, says \u2018Hello, How do you do. Please state your problem\u2019 and we continue in a short loop of conversation, the computer using cues from my answers to create new questions and leading fragments of conversation.\n\nFigure 3\n\nAlbeit a very limited demonstration of how humans could interact with a computer in 1966, it\u2019s amusing to play with now and compare it to something as richly interactive as the Microsoft Surface (Figure 4). Here, we see clear Lucite blocks that display projected video. Each side of the block has a different view of the video, so not only does one have to match up the images as they are moving, but do so in the proper directionality.\n\nFigure 4\n\nTakeway: the better we know our environment, the more we can alter it to emulate, expand and even supersede our message.\n\nLeveraging Holiday Cheer\n\nSince most of us at least have a few days off for the holidays now that Christmas is upon us, now\u2019s a perfect time to reflect on ones\u2019 environment and examine the messages within it. Convince your spouse to find you a few audio books for stocking stuffers. Find interactive games to play with your kids and observe them, and yourself, during the interaction. 
Pour a nice egg-nog and sit down with a copy of Marshall McLuhan\u2019s \u2018The Medium is the Massage.\u2019 Leverage that holiday cheer and here\u2019s to a prosperous, interactive new year.", "year": "2007", "author": "Molly Holzschlag", "author_slug": "mollyholzschlag", "published": "2007-12-22T00:00:00+00:00", "url": "https://24ways.org/2007/how-media-studies-can-massage-your-message/", "topic": "ux"} {"rowid": 156, "title": "Mobile 2.0", "contents": "Thinking 2.0\n\nAs web geeks, we have a thick skin towards jargon. We all know that \u201cWeb 2.0\u201d has been done to death. At Blue Flavor we even have a jargon bucket to penalize those who utter such painfully overused jargon with a cash deposit. But Web 2.0 is a term that has lodged itself into the conscience of the masses. This is actually a good thing.\n\nThe 2.0 suffix was able to succinctly summarize all that was wrong with the Web during the dot-com era as well as the next evolution of an evolving media. While the core technologies actually stayed basically the same, the principles, concepts, interactions and contexts were radically different.\n\nWith that in mind, this Christmas I want to introduce to you the concept of Mobile 2.0. While not exactly a new concept in the mobile community, it is relatively unknown in the web community. And since the foundation of Mobile 2.0 is the web, I figured it was about time for you to get to know each other.\n\nIt\u2019s the Carriers\u2019 world. We just live in it.\n\nBefore getting into Mobile 2.0, I thought first I should introduce you to its older brother. You know the kind, the kid with emotional problems that likes to beat up on you and your friends for absolutely no reason. That is the mobile of today.\n\nThe mobile ecosystem is a very complicated space often and incorrectly compared to the Web. If the Web was a freewheeling hippie \u2014 believing in freedom of information and the unity of man through communities \u2014 then Mobile is the cutthroat capitalist \u2014 out to pillage and plunder for the sake of the almighty dollar. Where the Web is relatively easy to publish to and ultimately make a buck, Mobile is wrought with layers of complexity, politics and obstacles. \n\nI can think of no better way to summarize these challenges than the testimony of Jason Devitt to the United States Congress in what is now being referred to as the \u201ciPhone Hearing.\u201d Jason is the co-founder and CEO of SkyDeck a new wireless startup and former CEO of Vindigo an early pioneer in mobile content.\n\n\n\nAs Jason points out, the mobile ecosystem is a closed door environment controlled by the carriers, forcing the independent publisher to compete or succumb to the will of corporate behemoths.\n\nBut that is all about to change.\n\nIntroducing Mobile 2.0\n\nMobile 2.0 is term used by the mobile community to describe the current revolution happening in mobile. It describes the convergence of mobile and web services, adding portability, ubiquitous connectivity and location-aware services to add physical context to information found on the Web.\n\nIt\u2019s an important term that looks toward the future. Allowing us to imagine the possibilities that mobile technology has long promised but has yet to deliver. It imagines a world where developers can publish mobile content without the current constraints of the mobile ecosystem.\n\nLike the transition from Web 1.0 to 2.0, it signifies the shift away from corporate or brand-centered experiences to user-centered experiences. 
A focus on richer interactions, driven by user goals. Moving away from proprietary technologies to more open and standard ones, more akin to the Web. And most importantly (from our perspective as web geeks) a shift away from kludgy one-off mobile applications toward using the Web as a platform for content and services.\n\nThis means the world of the Web and the world of Mobile are coming together faster than you can say ARPU (Average Revenue Per User, a staple mobile term to you webbies). And this couldn\u2019t come at a better time. The importance of understanding and addressing user context is quickly becoming a crucial consideration to every interactive experience as the number of ways we access information on the Web increases.\n\nMobile enables the power of the Web, the collective information of millions of people, inherit payment channels and access to just about every other mass media to literally be overlaid on top of the physical world, in context to the person viewing it. \n\nAnyone who can\u2019t imagine how the influence of mobile technology can\u2019t transform how we perform even the simplest of daily tasks needs to get away from their desktop and see the new evolution of information.\n\nThe Instigators\n\nBut what will make Mobile 2.0 move from idillic concept to a hardened market reality in 2008 will be four key technologies. Its my guess that you know each them already.\n\n1. Opera\n\nOpera is like the little train that could. They have been a driving force on moving the Web as we know it on to mobile handsets. Opera technology has proven itself to be highly adaptable, finding itself preloaded on over 40 million handsets, available on televisions sets through Nintendo Wii or via the Nintendo DS.\n\n2. WebKit\n\nMany were surprised when Apple chose to use KHTML instead of Gecko (the guts of Firefox) to power their Safari rendering engine. But WebKit has quickly evolved to be a powerful and flexible browser in the mobile context. WebKit has been in Nokia smartphones for a few years now, is the technology behind Mobile Safari in the iPhone and the iPod Touch and is the default web technology in Google\u2019s open mobile platform effort, Android.\n\n3. The iPhone\n\nThe iPhone has finally brought the concepts and principles of Mobile 2.0 into the forefront of consumers minds and therefore developers\u2019 minds as well. Over 500 web applications have been written specifically for the iPhone since its launch. It\u2019s completely unheard of to see so many applications built for the mobile context in such a short period of time.\n\n4. CSS & Javascript\n\nWeb 2.0 could not exist without the rich interactions offered by CSS and Javascript, and Mobile 2.0 is no different. CSS and Javascript support across multiple phones historically has been, well\u2026 to put it positively\u2026 utter crap.\n\nJavascript finally allows developers to create interesting interactions that support user goals and the mobile context. Specially, AJAX allows us to finally shed the days of bloated Java applications and focus on portable and flexible web applications. 
While CSS \u2014 namely CSS3 \u2014 allows us to create designs that are as beautiful as they are economical with bandwidth and load times.\n\nWith Leaflets, a collection of iPhone optimized web apps we created, we heavily relied on CSS3 to cache and reuse design elements over and over, minimizing download times while providing an elegant and user-centered design.\n\n\n\nIn Conclusion\n\nIt is the combination of all these instigators that is significantly decreasing the bar to mobile publishing. The market as Jason Devitt describes it, will begin to fade into the background. And maybe the world of mobile will finally start looking more like the Web that we all know and love.\n\nSo after the merriment and celebration of the holiday is over and you look toward the new year to refresh and renew, I hope that you take a seriously consider the mobile medium. \n\nBy this time next year, it is predicted that one-third of humanity will be using mobile devices to access the Web.", "year": "2007", "author": "Brian Fling", "author_slug": "brianfling", "published": "2007-12-21T00:00:00+00:00", "url": "https://24ways.org/2007/mobile-2-0/", "topic": "business"} {"rowid": 154, "title": "Diagnostic Styling", "contents": "We\u2019re all used to using CSS to make our designs live and breathe, but there\u2019s another way to use CSS: to find out where our markup might be choking on missing accessibility features, targetless links, and just plain missing content. \n\nNote: the techniques discussed here mostly work in Firefox, Safari, and Opera, but not Internet Explorer. I\u2019ll explain why that\u2019s not really a problem near the end of the article \u2014 and no, the reason is not \u201ceveryone should just ignore IE anyway\u201d.\n\nBasic Diagnostics\n\nTo pick a simple example, suppose you want to call out all holdover font and center elements in a site. Simple: you just add the following to your styles.\n\nfont, center {outline: 5px solid red;}\n\nYou could take it further and add in a nice lime background or some such, but big thick red outlines should suffice. Now you\u2019ll be able to see the offenders wherever as you move through the site. (Of course, if you do this on your public server, everyone else will see the outlines too. So this is probably best done on a development server or local copy of the site.)\n\nNot everyone may be familiar with outlines, which were introduced in CSS2, so a word on those before we move on. Outlines are much like borders, except outlines don\u2019t affect layout. Eh? Here\u2019s a comparison.\n\n\n\nOn the left, you have a border. On the right, an outline. The border takes up layout space, pushing other content around and generally being a nuisance. The outline, on the other hand, just draws into quietly into place. In most current browsers, it will overdraw any content already onscreen, and will be overdrawn by any content placed later \u2014 which is why it overlaps the images above it, and is overlapped by those below it.\n\nOkay, so we can outline deprecated elements like font and center. Is that all? Oh no.\n\nAttribution\n\nLet\u2019s suppose you also want to find any instances of inline style \u2014 that is, use of the style attribute on elements in the markup. This is generally discouraged (outside of HTML e-mails, which I\u2019m not going to get anywhere near), as it\u2019s just another side of the same coin of using font: baking the presentation into the document structure instead of putting it somewhere more manageable. 
So:\n\n*[style], font, center {outline: 5px solid red;}\n\nAdding that attribute selector to the rule\u2019s grouped selector means that we\u2019ll now be outlining any element with a style attribute.\n\nThere\u2019s a lot more that attribute selectors will let use diagnose. For example, we can highlight any images that have empty alt or title text.\n\nimg[alt=\"\"] {border: 3px dotted red;}\nimg[title=\"\"] {outline: 3px dotted fuchsia;}\n\nNow, you may wonder why one of these rules calls for a border, and the other for an outline. That\u2019s because I want them to \u201cadd together\u201d \u2014 that is, if I have an image which possesses both alt and title, and the values of both are empty, then I want it to be doubly marked.\n\n\n\nSee how the middle image there has both red and fuchsia dots running around it? (And am I the only one who sorely misses the actual circular dots drawn by IE5/Mac?) That\u2019s due to its markup, which we can see here in a fragment showing the whole table row.\n\n\nempty title\n\n\"\"\n\"comical\"\n\n\nRight, that\u2019s all well and good, but it misses a rather more serious situation: the selector img[alt=\"\"] won\u2019t match an img element that doesn\u2019t even have an alt attribute. How to tackle this problem?\n\nNot a Problem\n\nWell, if you want to select something based on a negative, you need a negative selector.\n\nimg:not([alt]) {border: 5px solid red;}\n\nThis is really quite a break from the rest of CSS selection, which is all positive: \u201cselect anything that has these characteristics\u201d. With :not(), we have the ability to say (in supporting browsers) \u201cselect anything that hasn\u2019t these characteristics\u201d. In the above example, only img elements that do not have an alt attribute will be selected. So we expand our list of image-related rules to read:\n\nimg[alt=\"\"] {border: 3px dotted red;}\nimg[title=\"\"] {outline: 3px dotted fuchsia;}\nimg:not([alt]) {border: 5px solid red;}\nimg:not([title]) {outline: 5px solid fuchsia;}\n\nWith the following results:\n\n\n\nWe could expand this general idea to pick up tables who lack a summary, or have an empty summary attribute.\n\ntable[summary=\"\"] {outline: 3px dotted red;}\ntable:not([summary]) {outline: 5px solid red;}\n\nWhen it comes to selecting header cells that lack the proper scope, however, we have a trickier situation. Finding headers with no scope attribute is easy enough, but what about those that have a scope attribute with an incorrect value?\n\nIn this case, we actually need to pull an on-off maneuver. This has us setting all th elements to have a highlight style, and then turn it off for the elements that meet our criteria.\n\nth {border: 2px solid red;}\nth[scope=\"col\"], th[scope=\"row\"] {border: none;}\n\nThis was necessary because of the way CSS selectors work. For example, consider this:\n\nth:not([scope=\"col\"]), th:not([scope=\"row\"]) {border: 2px solid red;}\n\nThat would select\u2026all th elements, regardless of their attrributes. That\u2019s because every th element doesn\u2019t have a scope of col, doesn\u2019t have a scope of row, or doesn\u2019t have either. There\u2019s no escaping this selector o\u2019 doom!\n\nThis limitation arises because :not() is limited to containing a single \u201cthing\u201d within its parentheses. You can\u2019t, for example, say \u201cselect all elements except those that are images which descend from list items\u201d. 
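\n\nWhat you can do, though, is chain more than one :not() onto the same element, since each one still contains only a single simple selector. In theory that means the scope example above could also be written without the on-off maneuver, like so, though as always you should test it in the browsers you actually care about:\n\nth:not([scope=\"col\"]):not([scope=\"row\"]) {border: 2px solid red;}\n\n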
Reportedly, this limitation was imposed to make browser implementation of :not() easier.\n\nStill, we can make good use of :not() in the service of further diagnosing. Calling out links in trouble is a breeze:\n\na[href]:not([title]) {border: 5px solid red;}\na[title=\"\"] {outline: 3px dotted red;}\na[href=\"#\"] {background: lime;}\na[href=\"\"] {background: fuchsia;}\n\n\n\nHere we have a set that will call our attention to links missing title information, as well as links that have no valid target, whether through a missing URL or a JavaScript-driven page where there are no link fallbacks in the case of missing or disabled JavaScript (href=\"#\").\n\nAnd What About IE?\n\nAs I said at the beginning, much of what I covered here doesn\u2019t work in Internet Explorer, most particularly :not() and outline. (Oh, so basically everything? -Ed.)\n\nI can\u2019t do much about the latter. For the former, however, it\u2019s possible to hack your way around the problem by doing some layered on-off stuff. For example, for images, you replace the previously-shown rules with the following:\n\nimg {border: 5px solid red;}\nimg[alt][title] {border-width: 0;}\nimg[alt] {border-color: fuchsia;}\nimg[alt], img[title] {border-style: double;}\nimg[alt=\"\"][title],\nimg[alt][title=\"\"] {border-width: 3px;}\nimg[alt=\"\"][title=\"\"] {border-style: dotted;}\n\nIt won\u2019t have exactly the same set of effects, given the inability to use both borders and outlines, but will still highlight troublesome images.\n\n\n\nIt\u2019s also the case that IE6 and earlier lack support for even attribute selectors, whereas IE7 added pretty much all the attribute selector types there are, so the previous code block won\u2019t have any effect previous to IE7.\n\nIn a broader sense, though, these kinds of styles probably aren\u2019t going to be used in the wild, as it were. Diagnostic styles are something only you see as you work on a site, so you can make sure to use a browser that supports outlines and :not() when you\u2019re diagnosing. The fact that IE users won\u2019t see these styles is irrelevant since users of any browser probably won\u2019t be seeing these styles.\n\nPersonally, I always develop in Firefox anyway, thanks to its ability to become a full-featured IDE through the addition of extensions like Firebug and the Web Developer Toolbar.\n\n\nYeah, About That\u2026\n\nIt\u2019s true that much of what I describe in this article is available in the WDT. I feel there are two advantages to writing your own set of diagnostic styles.\n\n\n\tWhen you write your own styles, you can define exactly what the visual results will be, and how they will interact. The WDT doesn\u2019t let you make its outlines thicker or change their colors.\n\tYou can combine a bunch of diagnostics into a single set of rules and add it to your site\u2019s style sheet during the diagnostic portion, thus ensuring they persist as you surf around. This can be done in the WDT, but it isn\u2019t as easy (and, at least for me, not as reliable).\n\n\nIt\u2019s also true that a markup validator will catch many of the errors I mentioned, such as missing alt and summary attributes. For some, that\u2019s sufficient. But it won\u2019t catch everything diagnostic styles can, like empty alt values or untargeted links, which are perfectly valid, syntactically speaking.\n\n\nDiagnosis Complete?\n\nI hope this has been a fun look at the concept of diagnostic styling as well as a quick introduction into possibly new concepts like :not() and outlines. 
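\n\nFor reference, here are the main rules from this article gathered into a single block, the sort of thing you might paste into a development-only style sheet while you work (the colors and widths are just the ones I happened to use; tune them to whatever stands out for you):\n\n/* development-only diagnostics */\n*[style], font, center {outline: 5px solid red;}\nimg[alt=\"\"] {border: 3px dotted red;}\nimg[title=\"\"] {outline: 3px dotted fuchsia;}\nimg:not([alt]) {border: 5px solid red;}\nimg:not([title]) {outline: 5px solid fuchsia;}\ntable[summary=\"\"] {outline: 3px dotted red;}\ntable:not([summary]) {outline: 5px solid red;}\nth {border: 2px solid red;}\nth[scope=\"col\"], th[scope=\"row\"] {border: none;}\na[href]:not([title]) {border: 5px solid red;}\na[title=\"\"] {outline: 3px dotted red;}\na[href=\"#\"] {background: lime;}\na[href=\"\"] {background: fuchsia;}\n\n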
This isn\u2019t all there is to say, of course: there is plenty more that could be added to a diagnostic style sheet. And everyone\u2019s diagnostics will be different, tuned to meet each person\u2019s unique situation.\n\nMostly, though, I hope this small exploration triggers some creative thinking about the use of CSS to do more than just lay out pages and colorize text. Given the familiarity we acquire with CSS, it only makes sense to use it wherever it might be useful, and setting up visible diagnostic flags is just one more place for it to help us.", "year": "2007", "author": "Eric Meyer", "author_slug": "ericmeyer", "published": "2007-12-20T00:00:00+00:00", "url": "https://24ways.org/2007/diagnostic-styling/", "topic": "process"} {"rowid": 147, "title": "Christmas Is In The AIR", "contents": "That\u2019s right, Christmas is coming up fast and there\u2019s plenty of things to do. Get the tree and lights up, get the turkey, buy presents and who know what else. And what about Santa? He\u2019s got a list. I\u2019m pretty sure he\u2019s checking it twice.\n\nSure, we could use an existing list making web site or even a desktop widget. But we\u2019re geeks! What\u2019s the fun in that? Let\u2019s build our own to-do list application and do it with Adobe AIR!\n\nWhat\u2019s Adobe AIR?\n\nAdobe AIR, formerly codenamed Apollo, is a runtime environment that runs on both Windows and OSX (with Linux support to follow). This runtime environment lets you build desktop applications using Adobe technologies like Flash and Flex. Oh, and HTML. That\u2019s right, you web standards lovin\u2019 maniac. You can build desktop applications that can run cross-platform using the trio of technologies, HTML, CSS and JavaScript.\n\nIf you\u2019ve tried developing with AIR before, you\u2019ll need to get re-familiarized with the latest beta release as many things have changed since the last one (such as the API and restrictions within the sandbox.)\n\nTo get started\n\nTo get started in building an AIR application, you\u2019ll need two basic things:\n\n\n\tThe AIR runtime. The runtime is needed to run any AIR-based application.\n\tThe SDK. The software development kit gives you all the pieces to test your application. Unzip the SDK into any folder you wish.\n\n\nYou\u2019ll also want to get your hands on the JavaScript API documentation which you\u2019ll no doubt find yourself getting into before too long. (You can download it, too.)\n\nAlso of interest, some development environments have support for AIR built right in. Aptana doesn\u2019t have support for beta 3 yet but I suspect it\u2019ll be available shortly.\n\nWithin the SDK, there are two main tools that we\u2019ll use: one to test the application (ADL) and another to build a distributable package of our application (ADT). I\u2019ll get into this some more when we get to that stage of development.\n\nBuilding our To-do list application\n\nThe first step to building an application within AIR is to create an XML file that defines our default application settings. I call mine application.xml, mostly because Aptana does that by default when creating a new AIR project. It makes sense though and I\u2019ve stuck with it. Included in the templates folder of the SDK is an example XML file that you can use.\n\nThe first key part to this after specifying things like the application ID, version, and filename, is to specify what the default content should be within the content tags. Enter in the name of the HTML file you wish to load. 
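\n\nAs a rough sketch of what I mean (element names and nesting have shifted between beta releases, so copy from the descriptor template in your own SDK rather than from here), the relevant fragment ends up looking something like this:\n\n<initialWindow>\n\t<content>ui.html</content>\n</initialWindow>\n\n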
Within this HTML file will be our application.\n\nui.html\n\nCreate a new HTML document and name it ui.html and place it in the same directory as the application.xml file. The first thing you\u2019ll want to do is copy over the AIRAliases.js file from the frameworks folder of the SDK and add a link to it within your HTML document.\n\n\n\nThe aliases create shorthand links to all of the Flash-based APIs.\n\nNow is probably a good time to explain how to debug your application.\n\nDebugging our application\n\nSo, with our XML file created and HTML file started, let\u2019s try testing our \u2018application\u2019. We\u2019ll need the ADL application located in BIN folder of the SDK and tell it to run the application.xml file.\n\n/path/to/adl /path/to/application.xml\n\nYou can also just drag the XML file onto ADL and it\u2019ll accomplish the same thing. If you just did that and noticed that your blank application didn\u2019t load, you\u2019d be correct. It\u2019s running but isn\u2019t visible. Which at this point means you\u2019ll have to shut down the ADL process. Sorry about that!\n\nChanging the visibility\n\nYou have two ways to make your application visible. You can do it automatically by setting the placing true in the visible tag within the application.xml file.\n\ntrue\n\nThe other way is to do it programmatically from within your application. You\u2019d want to do it this way if you had other startup tasks to perform before showing the interface. To turn the UI on programmatically, simple set the visible property of nativeWindow to true.\n\n\n\nSandbox Security\n\nNow that we have an application that we can see when we start it, it\u2019s time to build the to-do list application. In doing so, you\u2019d probably think that using a JavaScript library is a really good idea \u2014 and it can be but there are some limitations within AIR that have to be considered.\n\nAn HTML document, by default, runs within the application sandbox. You have full access to the AIR APIs but once the onload event of the window has fired, you\u2019ll have a limited ability to make use of eval and other dynamic script injection approaches. This limits the ability of external sources from gaining access to everything the AIR API offers, such as database and local file system access. You\u2019ll still be able to make use of eval for evaluating JSON responses, which is probably the most important if you wish to consume JSON-based services.\n\nIf you wish to create a greater wall of security between AIR and your HTML document loading in external resources, you can create a child sandbox. We won\u2019t need to worry about it for our application so I won\u2019t go any further into it but definitely keep this in mind.\n\nFinally, our application\n\nGetting tired of all this preamble? Let\u2019s actually build our to-do list application. I\u2019ll use jQuery because it\u2019s small and should suit our needs nicely. Let\u2019s begin with some structure:\n\n\n\t\n\t\n\t\n\n\nNow we need to wire up that button to actually add a new item to our to-do list.\n\n\n\nAnd just like that, we\u2019ve got a to-do list! That\u2019s it! Just never close your application and you\u2019ll remember everything. Okay, that\u2019s not very practical. You need to have some way of storing your to-do items until the next time you open up the application.\n\nStoring Data\n\nYou\u2019ve essentially got 4 different ways that you can store data:\n\n\n\tUsing the local database. AIR comes with SQLLite built in. 
That means you can create tables and insert, update and select data from that database just like on a web server.\n\tUsing the file system. You can also create files on the local machine. You have access to a few folders on the local system such as the documents folder and the desktop.\n\tUsing EcryptedLocalStore. I like using the EcryptedLocalStore because it allows you to easily save key/value pairs and have that information encrypted. All this within just a couple lines of code.\n\tSending the data to a remote API. Our to-do list could sync up with Remember the Milk, for example.\n\n\nTo demonstrate some persistence, we\u2019ll use the file system to store our files. In addition, we\u2019ll let the user specify where the file should be saved. This way, we can create multiple to-do lists, keeping them separate and organized.\n\nThe application is now broken down into 4 basic tasks:\n\n\n\tLoad data from the file system.\n\tPerform any interface bindings.\n\tManage creating and deleting items from the list.\n\tSave any changes to the list back to the file system.\n\n\nLoading in data from the file system\n\nWhen the application starts up, we\u2019ll prompt the user to select a file or specify a new to-do list. Within AIR, there are 3 main file objects: File, FileMode, and FileStream. File handles file and path names, FileMode is used as a parameter for the FileStream to specify whether the file should be read-only or for write access. The FileStream object handles all the read/write activity.\n\nThe File object has a number of shortcuts to default paths like the documents folder, the desktop, or even the application store. In this case, we\u2019ll specify the documents folder as the default location and then use the browseForSave method to prompt the user to specify a new or existing file. If the user specifies an existing file, they\u2019ll be asked whether they want to overwrite it.\n\nvar store = air.File.documentsDirectory;\nvar fileStream = new air.FileStream();\nstore.browseForSave(\"Choose To-do List\");\n\nThen we add an event listener for when the user has selected a file. 
When the file is selected, we check to see if the file exists and if it does, read in the contents, splitting the file on new lines and creating our list items within the interface.\n\nstore.addEventListener(air.Event.SELECT, fileSelected);\nfunction fileSelected()\n{\n\tair.trace(store.nativePath);\n\t// load in any stored data\n\tvar byteData = new air.ByteArray();\n\tif(store.exists)\n\t{\n\t\tfileStream.open(store, air.FileMode.READ);\n\t\tfileStream.readBytes(byteData, 0, store.size);\n\t\tfileStream.close();\n\n\t\tif(byteData.length > 0)\n\t\t{\n\t\t\tvar s = byteData.readUTFBytes(byteData.length);\n\t\t\toldlist = s.split(\u201c\\r\\n\u201d);\n\n\t\t\t// create todolist items\n\t\t\tfor(var i=0; i < oldlist.length; i++)\n\t\t\t{\n\t\t\t\tcreateItem(oldlist[i], (new Date()).getTime() + i );\n\t\t\t}\n\t\t}\n\t}\n}\n\nPerform Interface Bindings\n\nThis is similar to before where we set the click event on the Add button but we\u2019ve moved the code to save the list into a separate function.\n\n$('#add').click(function(){\n\t\tvar t = $('#text').val();\n\t\tif(t){\n\t\t\t// create an ID using the time\n\t\t\tcreateItem(t, (new Date()).getTime() );\n\t\t}\n})\n\nManage creating and deleting items from the list\n\nThe list management is now in its own function, similar to before but with some extra information to identify list items and with calls to save our list after each change.\n\nfunction createItem(t, id)\n{\n\tif(t.length == 0) return;\n\t// add it to the todo list\n\ttodolist[id] = t;\n\t// use DOM methods to create the new list item\n\tvar li = document.createElement('li');\n\t// the extra space at the end creates a buffer between the text\n\t// and the delete link we're about to add\n\tli.appendChild(document.createTextNode(t + ' '));\n\t// create the delete link\n\tvar del = document.createElement('a');\n\t// this makes it a true link. I feel dirty doing this.\n\tdel.setAttribute('href', '#');\n\tdel.addEventListener('click', function(evt){\n\t\tvar id = this.id.substr(1);\n\t\tdelete todolist[id]; // remove the item from the list\n\t\tthis.parentNode.parentNode.removeChild(this.parentNode);\n\t\tsaveList();\n\t});\n\tdel.appendChild(document.createTextNode('[del]'));\n\tdel.id = 'd' + id;\n\tli.appendChild(del);\n\t// append everything to the list\n\t$('#list').append(li);\n\t//reset the text box\n\t$('#text').val('');\n\tsaveList();\n}\n\nSave changes to the file system\n\nAny time a change is made to the list, we update the file. The file will always reflect the current state of the list and we\u2019ll never have to click a save button. It just iterates through the list, adding a new line to each one.\n\nfunction saveList(){\n\tif(store.isDirectory) return;\n\tvar packet = '';\n\tfor(var i in todolist)\n\t{\n\t\tpacket += todolist[i] + '\\r\\n';\n\t}\n\tvar bytes = new air.ByteArray();\n\tbytes.writeUTFBytes(packet);\n\tfileStream.open(store, air.FileMode.WRITE);\n\tfileStream.writeBytes(bytes, 0, bytes.length);\n\tfileStream.close();\n}\n\nOne important thing to mention here is that we check if the store is a directory first. The reason we do this goes back to our browseForSave call. If the user cancels the dialog without selecting a file first, then the store points to the documentsDirectory that we set it to initially. Since we haven\u2019t specified a file, there\u2019s no place to save the list.\n\nHopefully by this point, you\u2019ve been thinking of some cool ways to pimp out your list. 
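\n\nOne quick example of the kind of thing I mean: wiring the Enter key up to the same routine as the Add button, so you can rattle off items without reaching for the mouse. Something along these lines, reusing the createItem function from above, should do it (exact key handling can differ between browsers and runtimes, so give it a quick test in AIR):\n\n$('#text').keypress(function(evt){\n\t// 13 is the Enter key\n\tif((evt.which || evt.keyCode) == 13){\n\t\tvar t = $('#text').val();\n\t\tif(t){\n\t\t\tcreateItem(t, (new Date()).getTime());\n\t\t}\n\t}\n});\n\n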
Now we need to package this up so that we can let other people use it, too.\n\nCreating a Package\n\nNow that we\u2019ve created our application, we need to package it up so that we can distribute it. This is a two step process. The first step is to create a code signing certificate (or you can pay for one from Thawte which will help authenticate you as an AIR application developer).\n\nTo create a self-signed certificate, run the following command. This will create a PFX file that you\u2019ll use to sign your application.\n\nadt -certificate -cn todo24ways 1024-RSA todo24ways.pfx mypassword\n\nAfter you\u2019ve done that, you\u2019ll need to create the package with the certificate\n\nadt -package -storetype pkcs12 -keystore todo24ways.pfx todo24ways.air application.xml .\n\nThe important part to mention here is the period at the end of the command. We\u2019re telling it to package up all files in the current directory.\n\nAfter that, just run the AIR file, which will install your application and run it.\n\nImportant things to remember about AIR\n\nWhen developing an HTML application, the rendering engine is Webkit. You\u2019ll thank your lucky stars that you aren\u2019t struggling with cross-browser issues. (My personal favourites are multiple backgrounds and border radius!)\n\nBe mindful of memory leaks. Things like Ajax calls and event binding can cause applications to slowly leak memory over time. Web pages are normally short lived but desktop applications are often open for hours, if not days, and you may find your little desktop application taking up more memory than anything else on your machine!\n\nThe WebKit runtime itself can also be a memory hog, usually taking about 15MB just for itself. If you create multiple HTML windows, it\u2019ll add another 15MB to your memory footprint. Our little to-do list application shouldn\u2019t be much of a concern, though.\n\nThe other important thing to remember is that you\u2019re still essentially running within a Flash environment. While you probably won\u2019t notice this working in small applications, the moment you need to move to multiple windows or need to accomplish stuff beyond what HTML and JavaScript can give you, the need to understand some of the Flash-based elements will become more important.\n\nLastly, the other thing to remember is that HTML links will load within the AIR application. If you want a link to open in the users web browser, you\u2019ll need to capture that event and handle it on your own. The following code takes the HREF from a clicked link and opens it in the default web browser.\n\nair.navigateToURL(new air.URLRequest(this.href));\n\nOnly the beginning\n\nOf course, this is only the beginning of what you can do with Adobe AIR. You don\u2019t have the same level of control as building a native desktop application, such as being able to launch other applications, but you do have more control than what you could have within a web application. 
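To round out the link-handling tip from a moment ago, here's one way you might wire it up with jQuery. This is only a sketch, and it assumes that every absolute http link should leave the application, so adjust the selector to taste:

$('a[href^="http"]').click(function(evt){
	// stop AIR loading the page inside the application window
	evt.preventDefault();
	// hand the URL off to the user's default browser instead
	air.navigateToURL(new air.URLRequest(this.href));
});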
Check out the Adobe AIR Developer Center for HTML and Ajax for tutorials and other resources.\n\nNow, go forth and create your desktop applications and hopefully you finish all your shopping before Christmas!\n\nDownload the example files.", "year": "2007", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2007-12-19T00:00:00+00:00", "url": "https://24ways.org/2007/christmas-is-in-the-air/", "topic": "code"} {"rowid": 161, "title": "Keeping JavaScript Dependencies At Bay", "contents": "As we are writing more and more complex JavaScript applications we run into issues that have hitherto (god I love that word) not been an issue. The first decision we have to make is what to do when planning our app: one big massive JS file or a lot of smaller, specialised files separated by task. \n\nPersonally, I tend to favour the latter, mainly because it allows you to work on components in parallel with other developers without lots of clashes in your version control. It also means that your application will be more lightweight as you only include components on demand.\n\nStarting with a global object\n\nThis is why it is a good plan to start your app with one single object that also becomes the namespace for the whole application, say for example myAwesomeApp:\n\nvar myAwesomeApp = {};\n\nYou can nest any necessary components into this one and also make sure that you check for dependencies like DOM support right up front.\n\nAdding the components\n\nThe other thing to add to this main object is a components object, which defines all the components that are there and their file names.\n\nvar myAwesomeApp = {\n\tcomponents :{\n\t\tformcheck:{\n\t\t\turl:'formcheck.js',\n\t\t\tloaded:false\n\t\t},\n\t\tdynamicnav:{\n\t\t\turl:'dynamicnav.js',\n\t\t\tloaded:false\n\t\t},\n\t\tgallery:{\n\t\t\turl:'gallery.js',\n\t\t\tloaded:false\n\t\t},\n\t\tlightbox:{\n\t\t\turl:'lightbox.js',\n\t\t\tloaded:false\n\t\t}\n\t}\n};\n\nTechnically you can also omit the loaded properties, but it is cleaner this way. The next thing to add is an addComponent function that can load your components on demand by adding new SCRIPT elements to the head of the documents when they are needed.\n\nvar myAwesomeApp = {\n\tcomponents :{\n\t\tformcheck:{\n\t\t\turl:'formcheck.js',\n\t\t\tloaded:false\n\t\t},\n\t\tdynamicnav:{\n\t\t\turl:'dynamicnav.js',\n\t\t\tloaded:false\n\t\t},\n\t\tgallery:{\n\t\t\turl:'gallery.js',\n\t\t\tloaded:false\n\t\t},\n\t\tlightbox:{\n\t\t\turl:'lightbox.js',\n\t\t\tloaded:false\n\t\t}\n\t},\n\taddComponent:function(component){\n\t\tvar c = this.components[component];\n\t\tif(c && c.loaded === false){\n\t\t\tvar s = document.createElement('script');\n\t\t\ts.setAttribute('type', 'text/javascript');\n\t\t\ts.setAttribute('src',c.url);\n\t\t\tdocument.getElementsByTagName('head')[0].appendChild(s);\n\t\t}\n\t}\n};\n\nThis allows you to add new components on the fly when they are not defined:\n\nif(!myAwesomeApp.components.gallery.loaded){\n\tmyAwesomeApp.addComponent('gallery');\n};\n\nVerifying that components have been loaded\n\nHowever, this is not safe as the file might not be available. 
To make the dynamic adding of components safer each of the components should have a callback at the end of them that notifies the main object that they indeed have been loaded:\n\nvar myAwesomeApp = {\n\tcomponents :{\n\t\tformcheck:{\n\t\t\turl:'formcheck.js',\n\t\t\tloaded:false\n\t\t},\n\t\tdynamicnav:{\n\t\t\turl:'dynamicnav.js',\n\t\t\tloaded:false\n\t\t},\n\t\tgallery:{\n\t\t\turl:'gallery.js',\n\t\t\tloaded:false\n\t\t},\n\t\tlightbox:{\n\t\t\turl:'lightbox.js',\n\t\t\tloaded:false\n\t\t}\n\t},\n\taddComponent:function(component){\n\t\tvar c = this.components[component];\n\t\tif(c && c.loaded === false){\n\t\t\tvar s = document.createElement('script');\n\t\t\ts.setAttribute('type', 'text/javascript');\n\t\t\ts.setAttribute('src',c.url);\n\t\t\tdocument.getElementsByTagName('head')[0].appendChild(s);\n\t\t}\n\t},\n\tcomponentAvailable:function(component){\n\t\tthis.components[component].loaded = true;\n\t}\n}\n\nFor example the gallery.js file should call this notification as a last line:\n\nmyAwesomeApp.gallery = function(){\n\t// [... other code ...]\n}();\nmyAwesomeApp.componentAvailable('gallery');\n\nTelling the implementers when components are available\n\nThe last thing to add (actually as a courtesy measure for debugging and implementers) is to offer a listener function that gets notified when the component has been loaded:\n\nvar myAwesomeApp = {\n\tcomponents :{\n\t\tformcheck:{\n\t\t\turl:'formcheck.js',\n\t\t\tloaded:false\n\t\t},\n\t\tdynamicnav:{\n\t\t\turl:'dynamicnav.js',\n\t\t\tloaded:false\n\t\t},\n\t\tgallery:{\n\t\t\turl:'gallery.js',\n\t\t\tloaded:false\n\t\t},\n\t\tlightbox:{\n\t\t\turl:'lightbox.js',\n\t\t\tloaded:false\n\t\t}\n\t},\n\taddComponent:function(component){\n\t\tvar c = this.components[component];\n\t\tif(c && c.loaded === false){\n\t\t\tvar s = document.createElement('script');\n\t\t\ts.setAttribute('type', 'text/javascript');\n\t\t\ts.setAttribute('src',c.url);\n\t\t\tdocument.getElementsByTagName('head')[0].appendChild(s);\n\t\t}\n\t},\n\tcomponentAvailable:function(component){\n\t\tthis.components[component].loaded = true;\n\t\tif(this.listener){\n\t\t\tthis.listener(component);\n\t\t};\n\t}\n};\n\nThis allows you to write a main listener function that acts when certain components have been loaded, for example:\n\nmyAwesomeApp.listener = function(component){\n\tif(component === 'gallery'){\n\t showGallery();\n\t}\n};\n\nExtending with other components\n\nAs the main object is public, other developers can extend the components object with own components and use the listener function to load dependent components. Say you have a bespoke component with data and labels in extra files:\n\nmyAwesomeApp.listener = function(component){\n\tif(component === 'bespokecomponent'){\n\t\tmyAwesomeApp.addComponent('bespokelabels');\n\t};\n\tif(component === 'bespokelabels'){\n\t\tmyAwesomeApp.addComponent('bespokedata');\n\t};\n\tif(component === 'bespokedata'){\n\t\tmyAwesomeApp,bespokecomponent.init();\n\t};\n};\nmyAwesomeApp.components.bespokecomponent = {\n\turl:'bespoke.js',\n\tloaded:false\n};\nmyAwesomeApp.components.bespokelabels = {\n\turl:'bespokelabels.js',\n\tloaded:false\n};\nmyAwesomeApp.components.bespokedata = {\n\turl:'bespokedata.js',\n\tloaded:false\n};\nmyAwesomeApp.addComponent('bespokecomponent');\n\nFollowing this practice you can write pretty complex apps and still have full control over what is available when. 
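One small refinement worth considering (my own addition rather than part of the pattern above) is to track scripts that have been requested but not yet confirmed as loaded, so that two quick calls for the same component don't append the same script element twice. The two methods might grow into something like this:

myAwesomeApp.addComponent = function(component){
	var c = this.components[component];
	// only request the file if it is neither loaded nor already on its way
	if(c && c.loaded === false && !c.loading){
		c.loading = true;
		var s = document.createElement('script');
		s.setAttribute('type', 'text/javascript');
		s.setAttribute('src', c.url);
		document.getElementsByTagName('head')[0].appendChild(s);
	}
};
myAwesomeApp.componentAvailable = function(component){
	var c = this.components[component];
	c.loading = false;
	c.loaded = true;
	if(this.listener){
		this.listener(component);
	}
};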
You can also extend this to allow for CSS files to be added on demand.\n\nInfluences\n\nIf you like this idea and wondered if someone already uses it, take a look at the Yahoo! User Interface library, and especially at the YAHOO_config option of the global YAHOO.js object.", "year": "2007", "author": "Christian Heilmann", "author_slug": "chrisheilmann", "published": "2007-12-18T00:00:00+00:00", "url": "https://24ways.org/2007/keeping-javascript-dependencies-at-bay/", "topic": "code"} {"rowid": 146, "title": "Increase Your Font Stacks With Font Matrix", "contents": "Web pages built in plain old HTML and CSS are displayed using only the fonts installed on users\u2019 computers (@font-face implementations excepted). To enable this, CSS provides the font-family property for specifying fonts in order of preference (often known as a font stack). For example:\n\nh1 {font-family: 'Egyptienne F', Cambria, Georgia, serif}\n\nSo in the above rule, headings will be displayed in Egyptienne F. If Egyptienne F is not available then Cambria will be used, failing that Georgia or the final fallback default serif font. This everyday bit of CSS will be common knowledge among all 24 ways readers.\n\nIt is also a commonly held belief that the only fonts we can rely on being installed on users\u2019 computers are the core web fonts of Arial, Times New Roman, Verdana, Georgia and friends. But is that really true?\n\nIf you look in the fonts folder of your computer, or even your Mum\u2019s computer, then you are likely to find a whole load of fonts besides the core ones. This is because many software packages automatically install extra typefaces. For example, Office 2003 installs over 100 additional fonts. Admittedly not all of these fonts are particularly refined, and not all are suitable for the Web. However they still do increase your options. \n\nThe Matrix\n\nI have put together a matrix of (western) fonts showing which are installed with Mac and Windows operating systems, which are installed with various versions of Microsoft Office, and which are installed with Adobe Creative Suite.\n\n\n\nThe matrix is available for download as an Excel file and as a CSV. There are no readily available statistics regarding the penetration of Office or Creative Suite, but you can probably take an educated guess based on your knowledge of your readers.\n\nThe idea of the matrix is that use can use it to help construct your font stack. First of all pick the font you\u2019d really like for your text \u2013 this doesn\u2019t have to be in the matrix. Then pick the generic family (serif, sans-serif, cursive, fantasy or monospace) and a font from each of the operating systems. Then pick any suitable fonts from the Office and Creative Suite lists.\n\nFor example, you may decide your headings should be in the increasingly ubiquitous Clarendon. This is a serif type face. At OS-level the most similar is arguably Georgia. Adobe CS2 comes with Century Old Style which has a similar feel. Century Schoolbook is similar too, and is installed with all versions of Office. Based on this your font stack becomes:\n\nfont-family: 'Clarendon Std', 'Century Old Style Std', 'Century Schoolbook', Georgia, serif\n\n\n\n\n\n\nNote the \u2018Std\u2019 suffix indicating a \u2018standard\u2019 OpenType file, which will normally be your best bet for more esoteric fonts.\n\nI\u2019m not suggesting the process of choosing suitable fonts is an easy one. 
Firstly there are nearly two hundred fonts in the matrix, so learning what each font looks like is tricky and potentially time consuming (if you haven\u2019t got all the fonts installed on a machine to hand you\u2019ll be doing a lot of Googling for previews). And it\u2019s not just as simple as choosing fonts that look similar or have related typographic backgrounds, they need to have similar metrics as well, This is especially true in terms of x-height which gives an indication of how big or small a font looks.\n\nOver to You\n\nThe main point of all this is that there are potentially more fonts to consider than is generally accepted, so branch out a little (carefully and tastefully) and bring a little variety to sites out there. If you come up with any novel font stacks based on this approach, please do blog them (tagged as per the footer) and at some point they could all be combined in one place for everyone to consider.\n\nAppendix\n\nWhat about Linux?\n\nThe only operating systems in the matrix are those from Microsoft and Apple. For completeness, Linux operating systems should be included too, although these are many and varied and very much in a minority, so I omitted them for time being. For the record, some Linux distributions come packaged with Microsoft\u2019s core fonts. Others use the Vera family, and others use the Liberation family which comprises fonts metrically identical to Times New Roman and Arial.\n\nSources\n\nThe sources of font information for the matrix are as follows:\n\n\n\tWindows XP SP2\n\tWindows Vista\n\tOffice 2003\n\tOffice 2007\n\tMac OSX Tiger\n\tMac OSX Leopard (scroll down two thirds)\n\tOffice 2004 (Mac) by inspecting my Microsoft Office 2004/Office/Fonts folder\n\tOffice 2008 (Mac) is expected to be as Office 2004 with the addition of the Vista ClearType fonts\n\tCreative Suite 2 (see pdf link in first comment)\n\tCreative Suite 3", "year": "2007", "author": "Richard Rutter", "author_slug": "richardrutter", "published": "2007-12-17T00:00:00+00:00", "url": "https://24ways.org/2007/increase-your-font-stacks-with-font-matrix/", "topic": "design"} {"rowid": 151, "title": "Get In Shape", "contents": "Pop quiz: what\u2019s wrong with the following navigation?\n\n\n\nMaybe nothing. But then again, maybe there\u2019s something bugging you about the way it comes together, something you can\u2019t quite put your finger on. It seems well-designed, but it also seems a little\u2026 off.\n\nThe design decisions that led to this eventual form were no doubt well-considered:\n\n\n\tClient: The top level needs to have a \u201ccurrent page\u201d status indicator of some sort.\n\tDesigner: How about a white tab?\n\tClient: Great! The second level needs to show up underneath the first level though\u2026\n\tDesigner: Okay, but that white tab I just added makes it hard to visually connect the bottom nav to the top.\n\tClient: Too late, we\u2019ve seen the white tab and we love it. Try and make it work.\n\tDesigner: Right. So I placed the second level in its own box.\n\tClient: Hmm. They seem too separated. I can\u2019t tell that the yellow nav is the second level of the first.\n\tDesigner: How about an indicator arrow?\n\tClient: Brilliant!\n\n\nThe problem is that the end result feels awkward and forced. During the design process, little decisions were made that ultimately affect the overall shape of the navigation. 
What started out as a neatly contained rounded rectangle ended up as an ambiguous double shape that looks funny, though it\u2019s often hard to pinpoint precisely why.\n\nThe Shape of Things\n\nWell the why in this case is because seemingly unrelated elements in a design still end up visually interacting. Adding a new item to a page impacts everything surrounding it. In this navigation example, we\u2019re looking at two individual objects that are close enough to each other that they form a relationship; if we reduce them to strictly their outlines, it\u2019s a little easier to see that this particular combination registers oddly.\n\n\n\nThe two shapes float with nothing really grounding them. If they were connected, perhaps it would be a different story. The white tab divides the top shape in half, leaving a gap in the middle of it. There\u2019s very little balance in this pairing because the overall shape of the navigation wasn\u2019t considered during the design process.\n\nHere\u2019s another example: Gmail. Great email client, but did you ever closely look at what\u2019s going on in that left hand navigation? The continuous blue bar around the message area spills out into the navigation. If we remove all text, we\u2019re left with this odd configuration:\n\n\n\nThough the reasoning for anchoring the navigation highlight against the message area might be sound, the result is an irregular shape that doesn\u2019t correspond with anything in reality. You may never consciously notice it, but once you do it\u2019s hard to miss. One other example courtesy of last.fm:\n\n\n\nThe two header areas are the same shade of pink so they appear to be closely connected. When reduced to their outlines it\u2019s easy to see that this combination is off-balance: the edges don\u2019t align, the sharp corners of the top shape aren\u2019t consistent with the rounded corners of the bottom, and the part jutting out on the right of the bottom one seems fairly random. The result is a duo of oddly mis-matched shapes.\n\nDesign Strategies\n\nOur minds tend to pick out familiar patterns. A clever designer can exploit this by creating references in his or her work to shapes and combinations with which viewers are already familiar. There are a few simple ideas that can be employed to help you achieve this: consistency, balance, and completion.\n\nConsistency\n\nA fairly simple way to unify the various disparate shapes on a page is by designing them with a certain amount of internal consistency. You don\u2019t need to apply an identical size, colour, border, or corner treatment to every single shape; devolving a design into boring repetition isn\u2019t what we\u2019re after here. But it certainly doesn\u2019t hurt to apply a set of common rules to most shapes within your work.\n\n\n\nConsider purevolume and its multiple rounded-corner panels. From the bottom of the site\u2019s main navigation to the grey \u201cExtras\u201d panels halfway down the page (shown above), multiple shapes use a common border radius for unity. Different colours, different sizes, different content, but the consistent outlines create a strong sense of similarity. Not that every shape on the site follows this rule; they break the pattern right at the top with a darker sharp-cornered header, and again with the thumbnails below. But the design remains unified, nonetheless.\n\nBalance\n\nArguably the biggest problem with the last.fm example earlier is one of balance. 
The area poking out of the bottom shape created a fairly obvious imbalance for no apparent reason. The right hand side is visually emphasized due to the greater area of pink coverage, but with the white gap left beside it, the emphasis seems unwarranted. It\u2019s possible to create tension in your design by mismatching shapes and throwing off the balance, but when that happens unintentionally it can look like a mistake.\n\n\n\nAbove are a few examples of design elements in balanced and unbalanced configurations. The examples in the top row are undeniably more pleasing to the eye than those in the bottom row. If these were fleshed out into full designs, those derived from the templates in the top row would naturally result in stronger work.\n\nTake a look at the header on 9Rules for a study in well-considered balance. On the left you\u2019ll see a couple of paragraphs of text, on the right you have floating navigational items, and both flank the site\u2019s logo. This unusual layout combines multiple design elements that look nothing alike, and places them together in a way that anchors each so that no one weighs down the header.\n\nCompletion\n\nAnd finally we come to the idea of completion. Shapes don\u2019t necessarily need hard outlines to be read visually as shapes, which can be exploited for various purposes. Notice how Zend\u2019s mid-page \u201cBusiness Topics\u201d and \u201cNews\u201d items (below) fade out to the right and bottom, but the placement of two of these side-by-side creates an impression of two panels rather than three disparate floating columns. By allowing the viewer\u2019s eye to complete the shapes, they\u2019ve lightened up the design of the page and removed inessential lines. In a busy design this technique could prove quite handy.\n\n\n\nAlong the same lines, the individual shapes within your design may also be combined visually to form outlines of larger shapes. The differently-coloured header and main content/sidebar shapes on Veerle\u2019s blog come together to form a single central panel, further emphasized by the slight drop shadow to the right.\n\nImplementation\n\nStudying how shape can be used effectively in design is simply a starting point. As with all things design-related, there are no hard and fast rules here; ultimately you may choose to bring these principles into your work more often, or break them for effect. But understanding how shapes interact within a page, and how that effect is ultimately perceived by viewers, is a key design principle you can use to impress your friends.", "year": "2007", "author": "Dave Shea", "author_slug": "daveshea", "published": "2007-12-16T00:00:00+00:00", "url": "https://24ways.org/2007/get-in-shape/", "topic": "design"} {"rowid": 162, "title": "Conditional Love", "contents": "\u201cBrowser.\u201d The four-letter word of web design.\n\nI mean, let\u2019s face it: on the good days, when things just work in your target browsers, it\u2019s marvelous. The air smells sweeter, birds\u2019 songs sound more melodious, and both your design and your code are looking sharp.\n\nBut on the less-than-good days (which is, frankly, most of them), you\u2019re compelled to tie up all your browsers in a sack, heave them into the nearest river, and start designing all-imagemap websites. 
We all play favorites, after all: some will swear by Firefox, Opera fans are allegedly legion, and others still will frown upon anything less than the latest WebKit nightly.\n\nThankfully, we do have an out for those little inconsistencies that crop up when dealing with cross-browser testing: CSS patches.\n\nSpare the Rod, Hack the Browser\n\nBefore committing browsercide over some rendering bug, a designer will typically reach for a snippet of CSS fix the faulty browser. Historically referred to as \u201chacks,\u201d I prefer Dan Cederholm\u2019s more client-friendly alternative, \u201cpatches\u201d.\n\nBut whatever you call them, CSS patches all work along the same principle: supply the proper property value to the good browsers, while giving higher maintenance other browsers an incorrect value that their frustrating idiosyncratic rendering engine can understand.\n\nTraditionally, this has been done either by exploiting incomplete CSS support:\n\n#content {\n\theight: 1%;\t // Let's force hasLayout for old versions of IE.\n\tline-height: 1.6;\n\tpadding: 1em;\n}\nhtml>body #content {\n\theight: auto; // Modern browsers get a proper height value.\n}\n\nor by exploiting bugs in their rendering engine to deliver alternate style rules:\n\n#content p {\n\tfont-size: .8em;\n\t/* Hide from Mac IE5 \\*/\n\tfont-size: .9em;\n\t/* End hiding from Mac IE5 */\n}\n\nWe\u2019ve even used these exploits to serve up whole stylesheets altogether:\n\n@import url(\"core.css\");\n@media tty {\n\ti{content:\"\\\";/*\" \"*/}} @import 'windows-ie5.css'; /*\";}\n}/* */\n\nThe list goes on, and on, and on. For every browser, for every bug, there\u2019s a patch available to fix some rendering bug.\n\nBut after some time working with standards-based layouts, I\u2019ve found that CSS patches, as we\u2019ve traditionally used them, become increasingly difficult to maintain. As stylesheets are modified over the course of a site\u2019s lifetime, inline fixes we\u2019ve written may become obsolete, making them difficult to find, update, or prune out of our CSS. A good patch requires a constant gardener to ensure that it adds more than just bloat to a stylesheet, and inline patches can be very hard to weed out of a decently sized CSS file.\n\nGiving the Kids Separate Rooms\n\nSince I joined Airbag Industries earlier this year, every project we\u2019ve worked on has this in the head of its templates:\n\n\n\n\n\nThe first element is, simply enough, a link element that points to the project\u2019s main CSS file. No patches, no hacks: just pure, modern browser-friendly style rules. Which, nine times out of ten, will net you a design that looks like spilled eggnog in various versions of Internet Explorer.\n\nBut don\u2019t reach for the mulled wine quite yet. Immediately after, we\u2019ve got a brace of conditional comments wrapped around two other link elements. These odd-looking comments allow us to selectively serve up additional stylesheets just to the version of IE that needs them. We\u2019ve got one for IE 6 and below:\n\n\n\nAnd another for IE7 and above:\n\n\n\nMicrosoft\u2019s conditional comments aren\u2019t exactly new, but they can be a valuable alternative to cooking CSS patches directly into a master stylesheet. 
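If you've not bumped into conditional comments before, the wrapped link elements described above look something like this. The filenames, paths and media attribute are placeholders rather than anything canonical:

<!--[if lte IE 6]>
	<link rel="stylesheet" type="text/css" href="/css/patches/ie6.css" media="screen, projection" />
<![endif]-->

<!--[if gte IE 7]>
	<link rel="stylesheet" type="text/css" href="/css/patches/ie7.css" media="screen, projection" />
<![endif]-->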
And though they\u2019re not a W3C-approved markup structure, I think they\u2019re just brilliant because they innovate within the spec: non-IE devices will assume that the comments are just that, and ignore the markup altogether.\n\nThis does, of course, mean that there\u2019s a little extra markup in the head of our documents. But this approach can seriously cut down on the unnecessary patches served up to the browsers that don\u2019t need them. Namely, we no longer have to write rules like this in our main stylesheet:\n\n#content {\n\theight: 1%;\t// Let's force hasLayout for old versions of IE.\n\tline-height: 1.6;\n\tpadding: 1em;\n}\nhtml>body #content {\n\theight: auto;\t// Modern browsers get a proper height value.\n}\n\nRather, we can simply write an un-patched rule in our core stylesheet:\n\n#content {\n\tline-height: 1.6;\n\tpadding: 1em;\n}\n\nAnd now, our patch for older versions of IE goes in\u2014you guessed it\u2014the stylesheet for older versions of IE:\n\n#content {\n\theight: 1%;\n}\n\nThe hasLayout patch is applied, our design\u2019s repaired, and\u2014most importantly\u2014the patch is only seen by the browser that needs it. The \u201cgood\u201d browsers don\u2019t have to incur any added stylesheet weight from our IE patches, and Internet Explorer gets the conditional love it deserves.\n\nMost importantly, this \u201ccompartmentalized\u201d approach to CSS patching makes it much easier for me to patch and maintain the fixes applied to a particular browser. If I need to track down a bug for IE7, I don\u2019t need to scroll through dozens or hundreds of rules in my core stylesheet: instead, I just open the considerably slimmer IE7-specific patch file, make my edits, and move right along.\n\nEven Good Children Misbehave\n\nWhile IE may occupy the bulk of our debugging time, there\u2019s no denying that other popular, modern browsers will occasionally disagree on how certain bits of CSS should be rendered. But without something as, well, pimp as conditional comments at our disposal, how do we bring the so-called \u201cgood browsers\u201d back in line with our design?\n\nAssuming you\u2019re loving the \u201cone patch file per browser\u201d model as much as I do, there\u2019s just one alternative: JavaScript.\n\nfunction isSaf() {\n\tvar isSaf = (document.childNodes && !document.all && !navigator.taintEnabled && !navigator.accentColorName) ? true : false;\n\treturn isSaf;\n}\nfunction isOp() {\n\tvar isOp = (window.opera) ? true : false;\n\treturn isOp;\n}\n\nInstead of relying on dotcom-era tactics of parsing the browser\u2019s user-agent string, we\u2019re testing here for support for various DOM objects, whose presence or absence we can use to reasonably infer the browser we\u2019re looking at. 
So running the isOp() function, for example, will test for Opera\u2019s proprietary window.opera object, and thereby accurately tell you if your user\u2019s running Norway\u2019s finest browser.\n\nWith scripts such as isOp() and isSaf() in place, you can then reasonably test which browser\u2019s viewing your content, and insert additional link elements as needed.\n\nfunction loadPatches(dir) {\n\tif (document.getElementsByTagName() && document.createElement()) {\n\t\tvar head = document.getElementsByTagName(\"head\")[0];\n\t\tif (head) {\n\t\t\tvar css = new Array();\n\t\t\tif (isSaf()) {\n\t\t\t\tcss.push(\"saf.css\");\n\t\t\t} else if (isOp()) {\n\t\t\t\tcss.push(\"opera.css\");\n\t\t\t}\n\t\t\tif (css.length) {\n\t\t\t\tvar link = document.createElement(\"link\");\n\t\t\t\tlink.setAttribute(\"rel\", \"stylesheet\");\n\t\t\t\tlink.setAttribute(\"type\", \"text/css\");\n\t\t\t\tlink.setAttribute(\"media\", \"screen, projection\");\n\t\t\t\tfor (var i = 0; i < css.length; i++) {\n\t\t\t\t\tvar tag = link.cloneNode(true);\n\t\t\t\t\ttag.setAttribute(\"href\", dir + css[0]);\n\t\t\t\t\thead.appendChild(tag);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nHere, we\u2019re testing the results of isSaf() and isOp(), one after the other. For each function that returns true, then the name of a new stylesheet is added to the oh-so-cleverly named css array. Then, for each entry in css, we create a new link element, point it at our patch file, and insert it into the head of our template.\n\nFire it up using your favorite onload or DOMContentLoaded function, and you\u2019re good to go.\n\nScripteat Emptor\n\nAt this point, some of the audience\u2019s more conscientious \u2018scripters may be preparing to lob figgy pudding at this author\u2019s head. And that\u2019s perfectly understandable; relying on JavaScript to patch CSS chafes a bit against the normally clean separation we have between our pages\u2019 content, presentation, and behavior layers.\n\nAnd beyond the philosophical concerns, this approach comes with a few technical caveats attached:\n\nBrowser detection? So un-133t.\n\nBrowser detection is not something I\u2019d typically recommend. Whenever possible, a proper DOM script should check for the support of a given object or method, rather than the device with which your users view your content.\n\nIt\u2019s JavaScript, so don\u2019t count on it being available.\n\nAccording to one site, roughly four percent of Internet users don\u2019t have JavaScript enabled. Your site\u2019s stats might be higher or lower than this number, but still: don\u2019t expect that every member of your audience will see these additional stylesheets, and ensure that your content\u2019s still accessible with JS turned off.\n\nBe a constant gardener.\n\nThe sample isSaf() and isOp() functions I\u2019ve written will tell you if the user\u2019s browser is Safari or Opera. As a result, stylesheets written to patch issues in an old browser may break when later releases repair the relevant CSS bugs.\n\nYou can, of course, add logic to these simple little scripts to serve up version-specific stylesheets, but that way madness may lie. In any event, test your work vigorously, and keep testing it when new versions of the targeted browsers come out. Make sure that a patch written today doesn\u2019t become a bug tomorrow.\n\nPatching Firefox, Opera, and Safari isn\u2019t something I\u2019ve had to do frequently: still, there have been occasions where the above script\u2019s come in handy. 
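In case it's useful, wiring the whole thing up needn't be any more involved than this, with a purely illustrative directory path:

window.onload = function() {
	// load any Safari or Opera patches from this directory
	loadPatches('/css/patches/');
};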
Between conditional comments, careful CSS auditing, and some judicious JavaScript, browser-based bugs can be handled with near-surgical precision.\n\nSo pass the \u2018nog. It\u2019s patchin\u2019 time.", "year": "2007", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2007-12-15T00:00:00+00:00", "url": "https://24ways.org/2007/conditional-love/", "topic": "code"} {"rowid": 149, "title": "Underpants Over My Trousers", "contents": "With Christmas approaching faster than a speeding bullet, this is the perfect time for you to think about that last minute present to buy for the web geek in your life. If you\u2019re stuck for ideas for that special someone, forget about that svelte iPhone case carved from solid mahogany and head instead to your nearest comic-book shop and pick up a selection of comics or graphic novels. (I\u2019ll be using some of my personal favourite comic books as examples throughout). \n\nTrust me, whether your nearest and dearest has been reading comics for a while or has never peered inside this four-colour world, they\u2019ll thank-you for it.\n\nAside from indulging their superhero fantasies, comic books can provide web designers with a rich vein of inspiring ideas and material to help them create shirt button popping, trouser bursting work for the web. I know from my own personal experience, that looking at aspects of comic book design, layout and conventions and thinking about the ways that they can inform web design has taken my design work in often-unexpected directions. \n\nThere are far too many fascinating facets of comic book design that provide web designers with inspiration to cover in the time that it takes to pull your underpants over your trousers. So I\u2019m going to concentrate on one muscle bound aspect of comic design, one that will make you think differently about how you lay out the content of your pages in panels. \n\nA suitcase full of Kryptonite\n\nNow, to the uninitiated onlooker, the panels of a comic book may appear to perform a similar function to still frames from a movie. But inside the pages of a comic, panels must work harder to help the reader understand the timing of a story. It is this method for conveying narrative timing to a reader that I believe can be highly useful to designers who work on the web as timing, drama and suspense are as important in the web world as they are in worlds occupied by costumed crime fighters and superheroes.\n\nI\u2019d like you to start by closing your eyes and thinking about your own process for laying out panels of content on a page. OK, you\u2019ll actually be better off with your eyes open if you\u2019re going to carry on reading.\n\nI\u2019ll bet you a suitcase full of Kryptonite that you often, if not always, structure your page layouts, and decide on the dimensions of those panels according to either:\n\n\n\tThe base grid that you are working to\n\tThe Golden Ratio or another mathematical schema\n\n\nMore likely, I bet that you decide on the size and the number of your panels based on the amount of content that will be going into them. From today, I\u2019d like you to think about taking a different approach. This approach not only addresses horizontal and vertical space, but also adds the dimension of time to your designs.\n\nSlowing down the action\n\nA comic book panel not only acts as a container for its content but also indicates to a reader how much time passes within the panel and as a result, how much time the reader should focus their attention on that one panel. 
\n\nSmaller panels create swift eye movement and shorter bursts of attention. Larger panels give the perception of more time elapsing in the story and subconsciously demands that a reader devotes more time to focus on it. \n\nConcrete by Paul Chadwick (Dark Horse Comics)\n\nThis use of panel dimensions to control timing can also be useful for web designers in designing the reading/user experience. Imagine a page full of information about a product or service. You\u2019ll naturally want the reader to focus for longer on the key benefits of your offering rather than perhaps its technical specifications.\n\nNow take a look at this spread of pages from Watchmen by Alan Moore and Dave Gibbons.\n\nWatchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004)\n\nThroughout this series of (originally) twelve editions, artist Dave Gibbons stuck rigidly to his 3\u00d73 panels per page design and deviated from it only for dramatic moments within the narrative. \n\nIn particular during the last few pages of chapter eleven, Gibbons adds weight to the impending doom by slowing down the action by using larger panels and forces the reader to think longer about what was coming next. The action then speeds up through twelve smaller panels until the final panel: nothing more than white space and yet one of the most iconic and thought provoking in the entire twelve book series.\n\nWatchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004)\n\nOn the web it is common for clients to ask designers to fill every pixel of screen space with content, perhaps not understanding the drama that can be added by nothing more than white space.\n\nIn the final chapter, Gibbons emphasises the carnage that has taken place (unseen between chapters eleven and twelve) by presenting the reader with six full pages containing only single, large panels. \n\nWatchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004)\n\nThis drama, created by the artist\u2019s use of panel dimensions to control timing, is a technique that web designers can also usefully employ when emphasising important areas of content.\n\nThink back for a moment to the home page of Apple Inc., during the launch of their iconic iPhone, where the page contained nothing more than a large image and the phrase \u201cSay hello to iPhone\u201d. Rather than fill the page with sales messages, Apple\u2019s designers allowed the space itself to tell the story and created a real sense of suspense and expectation among their readers.\n\nBorders\n\nWhereas on the web, panel borders are commonly used to add emphasis to particular areas of content, in comic books they take on a different and sometimes opposite role. \n\nIn the examples so far, borders have contained all of the action. Removing a border can have the opposite effect to what you might at first think. Rather than taking emphasis away from their content, in comics, borderless panels allow the reader\u2019s eyes to linger for longer on the content adding even stronger emphasis.\n\nConcrete by Paul Chadwick (Dark Horse Comics)\n\nThis effect is amplified when the borderless content is allowed to bleed to the edges of a page. Because the content is no longer confined, except by the edges of the page (both comic and web) the reader\u2019s eye is left to wander out into open space. 
\n\nConcrete by Paul Chadwick (Dark Horse Comics)\n\nThis type of open, borderless content panel can be highly useful in placing emphasis on the most important content on a page in exactly the very opposite way that we commonly employ on the web today. \n\nSo why is time an important dimension to think about when designing your web pages? On one level, we are often already concerned with the short attention spans of visitors to our pages and should work hard to allow them to quickly and easily find and read the content that both they and we think is important. Learning lessons from comic book timing can only help us improve that experience.\n\nOn another: timing, suspense and drama are already everyday parts of the web browsing experience. Will a reader see what they expect when they click from one page to the next? Or are they in for a surprise? \n\nMost importantly, I believe that the web, like comics, is about story telling: often the story of the experiences that a customer will have when they use our product or service or interact with our organisation. It is this element of story telling than can be greatly improved by learning from comics.\n\nIt is exactly this kind of learning and adapting from older, more established and at first glance unrelated media that you will find can make a real distinctive difference to the design work that you create.\n\nFill your stockings\n\nIf you\u2019re a visual designer or developer and are not a regular reader of comics, from the moment that you pick up your first title, I know that you will find them inspiring. \n\nI will be writing more, and speaking about comic design applied to the web at several (to be announced) events this coming year. I hope you\u2019ll be slipping your underpants over your trousers and joining me then. In the meantime, here is some further reading to pick up on your next visit to a comic book or regular bookshop and slip into your stockings:\n\n\n\tComics and Sequential Art by Will Eisner (Northern Light Books 2001)\n\tUnderstanding Comics: The Invisible Art by Scott McCloud (Harper Collins 1994)\n\n\nHave a happy superhero season.\n\n(I would like to thank all of the talented artists, writers and publishers whose work I have used as examples in this article and the hundreds more who inspire me every day with their tall tales and talent.)", "year": "2007", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2007-12-14T00:00:00+00:00", "url": "https://24ways.org/2007/underpants-over-my-trousers/", "topic": "design"} {"rowid": 152, "title": "CSS for Accessibility", "contents": "CSS is magical stuff. In the right hands, it can transform the plainest of (well-structured) documents into a visual feast. But it\u2019s not all fur coat and nae knickers (as my granny used to say). Here are some simple ways you can use CSS to improve the usability and accessibility of your site. \n\nEven better, no sexy visuals will be harmed by the use of these techniques. 
Promise.\n\nNae knickers\n\nThis is less of an accessibility tip, and more of a reminder to check that you\u2019ve got your body background colour specified.\n\nIf you\u2019re sitting there wondering why I\u2019m mentioning this, because it\u2019s a really basic thing, then you might be as surprised as I was to discover that from a sample of over 200 sites checked last year, 35% of UK local authority websites were missing their body background colour.\n\nForgetting to specify your body background colour can lead to embarrassing gaps in coverage, which are not only unsightly, but can prevent your users reading the text on your site if they use a different operating system colour scheme.\n\nAll it needs is the following line to be added to your CSS file:\n\nbody {background-color: #fff;}\n\nIf you pair it with \n\ncolor: #000;\n\n\u2026 you\u2019ll be assured of maintaining contrast for any areas you inadvertently forget to specify, no matter what colour scheme your user needs or prefers.\n\nEven better, if you\u2019ve got standard reset CSS you use, make sure that default colours for background and text are specified in it, so you\u2019ll never be caught with your pants down. At the very least, you\u2019ll have a white background and black text that\u2019ll prompt you to change them to your chosen colours.\n\nElbow room\n\nPaying attention to your typography is important, but it\u2019s not just about making it look nice. \n\nCareful use of the line-height property can make your text more readable, which helps everyone, but is particularly helpful for those with dyslexia, who use screen magnification or simply find it uncomfortable to read lots of text online. \n\nWhen lines of text are too close together, it can cause the eye to skip down lines when reading, making it difficult to keep track of what you\u2019re reading across. \n\nSo, a bit of room is good.\n\nThat said, when lines of text are too far apart, it can be just as bad, because the eye has to jump to find the next line. 
That not only breaks up the reading rhythm, but can make it much more difficult for those using Screen Magnification (especially at high levels of magnification) to find the beginning of the next line which follows on from the end of the line they\u2019ve just read.\n\nUsing a line height of between 1.2 and 1.6 times normal can improve readability, and using unit-less line heights help take care of any pesky browser calculation problems.\n\nFor example: \n\np {\n\tfont-family: \"Lucida Grande\", Lucida, Verdana, Helvetica, sans-serif;\n\tfont-size: 1em;\n\tline-height: 1.3;\n}\n\nor if you want to use the shorthand version:\n\np {\n\tfont: 1em/1.3 \"Lucida Grande\", Lucida, Verdana, Helvetica, sans-serif;\n}\n\nView some examples of different line-heights, based on default text size of 100%/1em.\n\nFurther reading on Unitless line-heights from Eric Meyer.\n\nTransformers: Initial case in disguise\n\nNobody wants to shout at their users, but there are some occasions when you might legitimately want to use uppercase on your site.\n\nAvoid screen-reader pronunciation weirdness (where, for example, CONTACT US would be read out as Contact U S, which is not the same thing \u2013 unless you really are offering your users the chance to contact the United States) caused by using uppercase by using title case for your text and using the often neglected text-transform property to fake uppercase.\n\nFor example:\n\n.uppercase {\n\ttext-transform: uppercase\n} \n\nDon\u2019t overdo it though, as uppercase text is harder to read than normal text, not to mention the whole SHOUTING thing.\n\nLinky love\n\nWhen it comes to accessibility, keyboard only users (which includes those who use voice recognition software) who can see just fine are often forgotten about in favour of screen reader users.\n\nThis Christmas, share the accessibility love and light up those links so all of your users can easily find their way around your site.\n\nThe link outline\n\nAKA: the focus ring, or that dotted box that goes around links to show users where they are on the site.\n\nThe techniques below are intended to supplement this, not take the place of it. You may think it\u2019s ugly and want to get rid of it, especially since you\u2019re going to the effort of tarting up your links.\n\nDon\u2019t. \n\nJust don\u2019t.\n\nThe non-underlined underline\n\nIf you listen to Jacob Nielsen, every link on your site should be underlined so users know it\u2019s a link.\n\nYou might disagree with him on this (I know I do), but if you are choosing to go with underlined links, in whatever state, then remove the default underline and replacing it with a border that\u2019s a couple of pixels away from the text. 
\n\nThe underline is still there, but it\u2019s no longer cutting off the bottom of letters with descenders (e.g., g and y) which makes it easier to read.\n\nThis is illustrated in Examples 1 and 2.\n\nYou can modify the three lines of code below to suit your own colour and border style preferences, and add it to whichever link state you like.\n\ntext-decoration: none;\nborder-bottom: 1px #000 solid;\npadding-bottom: 2px;\n\nStanding out from the crowd\n\nWhatever way you choose to do it, you should be making sure your links stand out from the crowd of normal text which surrounds them when in their default state, and especially in their hover or focus states.\n\nA good way of doing this is to reverse the colours when on hover or focus.\n\nWell-focused\n\nEveryone knows that you can use the :hover pseudo class to change the look of a link when you mouse over it, but, somewhat ironically, people who can see and use a mouse are the group who least need this extra visual clue, since the cursor handily (sorry) changes from an arrow to a hand.\n\nSo spare a thought for the non-pointing device users that visit your site and take the time to duplicate that hover look by using the :focus pseudo class.\n\nOf course, the internets being what they are, it\u2019s not quite that simple, and predictably, Internet Explorer is the culprit once more with it\u2019s frustrating lack of support for :focus. Instead it applies the :active pseudo class whenever an anchor has focus.\n\nWhat this means in practice is that if you want to make your links change on focus as well as on hover, you need to specify focus, hover and active.\n\nEven better, since the look and feel necessarily has to be the same for the last three states, you can combine them into one rule.\n\nSo if you wanted to do a simple reverse of colours for a link, and put it together with the non-underline underlines from before, the code might look like this:\n\na:link {\n\tbackground: #fff;\n\tcolor: #000;\n\tfont-weight: bold;\n\ttext-decoration: none; \n\tborder-bottom: 1px #000 solid; \n\tpadding-bottom: 2px;\n}\na:visited {\n\tbackground: #fff;\n\tcolor: #800080;\n\tfont-weight: bold;\n\ttext-decoration: none; \n\tborder-bottom: 1px #000 solid; \n\tpadding-bottom: 2px;\n}\na:focus, a:hover, a:active {\n\tbackground: #000;\n\tcolor: #fff;\n\tfont-weight: bold;\n\ttext-decoration: none; \n\tborder-bottom: 1px #000 solid; \n\tpadding-bottom: 2px;\n}\n\nExample 3 shows what this looks like in practice.\n\nLocation, Location, Location\n\nTo take this example to it\u2019s natural conclusion, you can add an id of current (or something similar) in appropriate places in your navigation, specify a full set of link styles for current, and have a navigation which, at a glance, lets users know which page or section they\u2019re currently in.\n\nExample navigation using location indicators.\n\nand the source code\n\nConclusion\n\nAll the examples here are intended to illustrate the concepts, and should not be taken as the absolute best way to format links or style navigation bars \u2013 that\u2019s up to you and whatever visual design you\u2019re using at the time.\n\nThey\u2019re also not the only things you should be doing to make your site accessible. 
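As a parting sketch, the location indicator idea might boil down to something along these lines. The markup structure here is an assumption for the sake of illustration, not the only way to do it:

<ul id="nav">
	<li><a href="/">Home</a></li>
	<li id="current"><a href="/articles/">Articles</a></li>
	<li><a href="/contact/">Contact</a></li>
</ul>

#nav #current a:link,
#nav #current a:visited {
	background: #000;
	color: #fff;
}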
\n\nAbove all, remember that accessibility is for life, not just for Christmas.", "year": "2007", "author": "Ann McMeekin", "author_slug": "annmcmeekin", "published": "2007-12-13T00:00:00+00:00", "url": "https://24ways.org/2007/css-for-accessibility/", "topic": "design"} {"rowid": 168, "title": "Unobtrusively Mapping Microformats with jQuery", "contents": "Microformats are everywhere. You can\u2019t shake an electronic stick these days without accidentally poking a microformat-enabled site, and many developers use microformats as a matter of course. And why not? After all, why invent your own class names when you can re-use pre-defined ones that give your site extra functionality for free?\n\nNevertheless, while it\u2019s good to know that users of tools such as Tails and Operator will derive added value from your shiny semantics, it\u2019s nice to be able to reuse that effort in your own code.\n\nWe\u2019re going to build a map of some of my favourite restaurants in Brighton. Fitting with the principles of unobtrusive JavaScript, we\u2019ll start with a semantically marked up list of restaurants, then use JavaScript to add the map, look up the restaurant locations and plot them as markers.\n\nWe\u2019ll be using a couple of powerful tools. The first is jQuery, a JavaScript library that is ideally suited for unobtrusive scripting. jQuery allows us to manipulate elements on the page based on their CSS selector, which makes it easy to extract information from microformats.\n\nThe second is Mapstraction, introduced here by Andrew Turner a few days ago. We\u2019ll be using Google Maps in the background, but Mapstraction makes it easy to change to a different provider if we want to later.\n\nGetting Started\n\nWe\u2019ll start off with a simple collection of microformatted restaurant details, representing my seven favourite restaurants in Brighton. The full, unstyled list can be seen in restaurants-plain.html. Each restaurant listing looks like this:\n\n
<li class="vcard">
	<h3 class="fn org">Riddle &amp; Finns</h3>
	<div class="adr">
		<p class="street-address">12b Meeting House Lane</p>
		<p><span class="locality">Brighton</span>, <span class="country-name">UK</span></p>
		<p class="postal-code">BN1 1HB</p>
	</div>
	<p>Telephone: <span class="tel">+44 (0)1273 323 008</span></p>
	<p>E-mail: <a class="email" href="mailto:info@riddleandfinns.co.uk">info@riddleandfinns.co.uk</a></p>
</li>

Since we're dealing with a list of restaurants, each hCard is marked up inside a list item. Each restaurant is an organisation; we signify this by placing the classes fn and org on the element surrounding the restaurant's name (according to the hCard spec, setting both fn and org to the same value signifies that the hCard represents an organisation rather than a person).

The address information itself is contained within a div of class adr. Note that the HTML address
    element is not suitable here for two reasons: firstly, it is intended to mark up contact details for the current document rather than generic addresses; secondly, address is an inline element and as such cannot contain the paragraphs elements used here for the address information.\n\nA nice thing about microformats is that they provide us with automatic hooks for our styling. For the moment we\u2019ll just tidy up the whitespace a bit; for more advanced style tips consult John Allsop\u2019s guide from 24 ways 2006.\n\n.vcard p {\n\tmargin: 0;\n}\n.adr {\n\tmargin-bottom: 0.5em;\n}\n\nTo plot the restaurants on a map we\u2019ll need latitude and longitude for each one. We can find this out from their address using geocoding. Most mapping APIs include support for geocoding, which means we can pass the API an address and get back a latitude/longitude point. Mapstraction provides an abstraction layer around these APIs which can be included using the following script tag:\n\n\n\nWhile we\u2019re at it, let\u2019s pull in the other external scripts we\u2019ll be using:\n\n\n\n\n\n\nThat\u2019s everything set up: let\u2019s write some JavaScript!\n\nIn jQuery, almost every operation starts with a call to the jQuery function. The function simulates method overloading to behave in different ways depending on the arguments passed to it. When writing unobtrusive JavaScript it\u2019s important to set up code to execute when the page has loaded to the point that the DOM is available to be manipulated. To do this with jQuery, pass a callback function to the jQuery function itself:\n\njQuery(function() {\n\t// This code will be executed when the DOM is ready\n});\n\nInitialising the map\n\nThe first thing we need to do is initialise our map. Mapstraction needs a div with an explicit width, height and ID to show it where to put the map. Our document doesn\u2019t currently include this markup, but we can insert it with a single line of jQuery code:\n\njQuery(function() {\n\t// First create a div to host the map\n\tvar themap = jQuery('
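<div id="themap"></div>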
    ').css({\n\t\t'width': '90%',\n\t\t'height': '400px'\n\t}).insertBefore('ul.restaurants');\n});\n\nWhile this is technically just a single line of JavaScript (with line-breaks added for readability) it\u2019s actually doing quite a lot of work. Let\u2019s break it down in to steps:\n\nvar themap = jQuery('
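<div id="themap"></div>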
    ')\n\nHere\u2019s jQuery\u2019s method overloading in action: if you pass it a string that starts with a < it assumes that you wish to create a new HTML element. This provides us with a handy shortcut for the more verbose DOM equivalent:\n\nvar themap = document.createElement('div');\nthemap.id = 'themap';\n\nNext we want to apply some CSS rules to the element. jQuery supports chaining, which means we can continue to call methods on the object returned by jQuery or any of its methods:\n\nvar themap = jQuery('
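<div id="themap"></div>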
    ').css({\n\t'width': '90%',\n\t'height': '400px'\n})\n\nFinally, we need to insert our new HTML element in to the page. jQuery provides a number of methods for element insertion, but in this case we want to position it directly before the