Data-driven Design with an Annual Survey

Too often, we base designs on assumptions that don’t match customer perspectives. Why? Because the data we need to make informed decisions isn’t available. Imagine starting off the year with a treasure trove of user data that can be filtered, sliced, and diced to inform new UI designs, help you discover where users struggle the most, and expose emerging trends in your customers’ needs that could lead to new features. Why, that would be useful indeed. And it’s easy to obtain by conducting an annual survey.

Annual surveys may seem as exciting as receiving socks and undies for Christmas, but they’re the gift that keeps on giving all year long (just like fresh socks and undies). I’m not ashamed to admit it: I love surveys! Each time my design research team runs a survey, we learn so much about customer motivations, interests, and behaviors. Surveys provide an aggregate snapshot of your users that can’t easily be obtained by other research methods, and they can be conducted quickly too. You can build a survey in a few hours, run a pilot test in a day, and have real results streaming in the following day. Speed is essential if design research is going to keep pace with a busy product release schedule.

Surveys are also an invaluable springboard for customer interviews, which provide deep perspectives on user behavior. If you play your cards right as you construct your survey, you can capture a user ID and an email address for each respondent, making it easy to get in touch with customers whose feedback is particularly intriguing. No more recruiting customers for your research via Twitter or through a recruiting company charging a small fortune. You can filter survey responses and isolate the exact customers to talk with in moments, not months.

I love this connected process of sending targeted surveys, filtering the results, and then — with surgical precision — selecting just the right customers to interview. Not only is it fast and cheap, but it lets design researchers do quantitative and qualitative research in a coordinated way. Aggregate survey responses help you quantify the perspectives of different user segments, and interviews help you get into the heads of your customers. An annual survey can give your team the data needed to make more informed designs in the new year. It all starts with a plan.

Planning your survey

Before you start jotting down questions to ask users, spend some time thinking about the work your team will be doing in the coming year. Are you planning new mobile apps or a responsive redesign? Then questions about devices used and behaviors around mobile devices might be in order. Rethinking your content strategy? Then you might want to ask a few questions about how your customers consume content. You can’t predict all of the projects you’ll be working on in the coming year, but tuck a couple of sections in your survey about the projects you’re certain about. This will give you the research you need to start new projects with solid foundational data.

Google Drive is a great place to start collaboratively building survey questions with colleagues. Questions that seem crystal clear in your head get challenged, refined, or even expanded quickly when the entire team can chime in. As you craft your survey, try to consider how you’ll filter it once all of the data is compiled. Do you need to see responses by industry, by age of an account, by devices used, or by size of company?
Adding the right filter questions can help you discover fascinating patterns in user segments. Filtering on responses to a few questions can surface insights like: customers in non-profit companies with more than 100 employees are 17% more likely to use an Android phone and are most attracted to features A, D, and F. A designer working on the landing page for a non-profit would love to have concrete information like this. Filter questions are key, so consider them carefully. But don’t go overboard — too many of them and you’ll start to hurt your survey response rate.

Multiple choice questions are the heart of most surveys because respondents can complete them quickly, which increases response rate, and researchers can analyze them without a lot of manual categorization. Open text field questions are valuable too, but be careful not to add too many to your survey. You’ll hate yourself after the survey’s done and you have to sort through and tag thousands of open responses so patterns become visible. Oy vey! An open-ended question works well towards the end of the survey. At this point respondents have a lot of topics swirling around in their head and tend to say weird things that will pique your interest. This is where you’ll find the outliers who are using your product. They’ll be fascinating to interview, and on occasion will help you see your work in a brand new way. Conclude your survey with a question asking permission to get in touch for a followup interview so you don’t pester people who want to be left alone.

With your questions nailed down, it’s time to build out that survey and get it ready for sending!

Building your survey

There are dozens of apps you could use to build your survey, but SurveyMonkey is the one that I prefer. It lets you pass in variables for each respondent such as user ID and email address. Metadata about respondents is essential if you’re going to do any follow-up interviews with your customers in the coming year. SurveyMonkey also makes it easy to set up question logic, showing questions to customers only if they responded in a certain way to a prior question. This helps you avoid asking irrelevant questions to some respondents.

Determining survey recipients

Once you’ve chosen a survey tool and entered all of your questions, you need to gather a list of recipients. Your first instinct will be to send it to everyone. You might say, “I need maximum response and metric shit tons of data!” But this is rarely the best approach — broad distribution almost always leads to lower response rates, increased noise, and decreased signal in your data. Are there subsets of customers you could send to, like only those who are active, those who are paying, or those who have been with you for a certain length of time? Talk to the keepers of your customer database and see how they can segment it so you can be certain you’re talking to just the people who will have the most relevant responses for your needs. If you want to get super nerdy when finding the right customer sample to survey, use a sample size calculator. Sampling is a deep subject best explored in other articles.

Crafting your survey email

After focusing your energies on writing and building your survey, the email asking your customers to respond seems almost trivial, but it will greatly influence your response rate. Take great care when writing your subject line and the body of the email. If you can pull it off, A/B testing subject lines can greatly improve the open rate of your email and click-through to your survey.
My design research team has seen a ~10% increase in open and click rates when we A/B tested. We’ve found that personalizing subject lines and greetings with the recipient’s name (i.e. “Hey, Aarron. How can we make our app work better for you?”) gave us the best response rates. Your mileage may vary. The tone of your email is important — be friendly, honest, and to the point. Those who are passionate about your product will be happy to share their perspective. Writing a survey email that people will actually respond to ain’t easy — in fact, they’re almost always annoying. But Ben Chestnut found a non-annoying way to send a survey email and improve response rates. The email sent for the 2013 MailChimp survey let customers know what we’d been up to in the previous year, and invited feedback on what we should work on in the coming year.

The link to your survey should be a clear call to action. A big button with a label like “Answer a few questions” generally does the trick. The URL linking to the survey will need to include some variables like user ID and email. It might look something like this if you’re using SurveyMonkey:

http://surveymonkey.com/s/somesurveyid/?uid=*|UID|*&email=*|email|*

As each email is sent, the proper data will be populated in the variables, passing it on to the survey app for inclusion in each response. This is the magic that will help you pinpoint customers to interview down the road, so take special care to test that all is working before sending to all recipients. How you construct the survey link will vary depending on what survey tool and email service provider you use, so don’t take my example as gospel. You’ll need to read the documentation for your survey and email apps to set things up properly.

Pilot before sending

By now, you’ve whipped yourself into a fever pitch over your brilliant survey and the data you hope to collect. Your finger is on the send button, poised for action, but there’s one very important thing to do before you send to the entire list of customers: send a pilot email. How do you know if your questions are clear, your form logic is sound, and you’re passing variables from the email to the survey properly? You won’t, unless you send to a small segment of your recipients first. The data collected in your pilot will make plain where your survey needs refinement. This data won’t be used in your final analysis, as you’re probably going to make a few changes to your questions. Send the pilot survey to enough people that you can really stress test the clarity of the questions and data you’re gathering, while considering how much data you can comfortably throw out. If you’re sending your final survey to a few thousand people, you might find a couple of hundred recipients for your pilot will give you enough insight into what to improve while leaving the vast majority of the recipients for your final survey. After you’ve sent your pilot, made your survey adjustments, and ensured the variables are being passed from your email into the survey app, you’re ready to send to the remainder of your customers. This is your moment of glory!

Analyzing your results

After a couple of weeks you can probably safely close the survey so no other responses come in as you transition from data gathering to data analysis. Any survey app worth its salt will chart responses to your multiple choice questions. Reviewing these charts is a great place to start your analysis. Is there anything particularly interesting that stands out? Jot down some of your observations.
I like to print screenshots of the charts for each question, highlighting areas of interest. These prints become a particularly handy reference point for the next step in your analysis. Printing results from a survey makes comparing different customers easy. Viewing aggregate data about all responses is interesting, but the deltas between different types of customers are where the real revelations happen. Remember those filter questions you added to your survey? They’re the tool that’ll help you compare customer segments. Most survey apps will let you filter the data based on responses to a question. If the one you’re using doesn’t, you can always export your data and create pivot tables in Excel. Try filtering your data based on one of your filter questions, such as industry, company size, or devices used. Now compare those printed screenshots of baseline responses to the filtered data. Chances are you’ll see some significant differences in how each group responded to your questions, giving you clues about the variance in interests and motivations in customer segments and a leg up as you work on future design projects.

Open-ended responses are equally interesting, but much more time-consuming to analyze. Yes, you need to read through thousands of responses, some of which are constructive and some of which are not. Taking the time to tag each open response will help you see trends and filter out the responses that are unhelpful. Unlike questions with predefined answers, open-ended responses let users express unique ideas and use cases you may not be looking for. The tedium of reading thousands of responses is always cut by eureka moments when users tell you something fascinating that changes your perspective on your app. These are the folks you want to pull out for follow-up interviews. Because you’ve already captured their email addresses when you set up your survey and your email, getting in touch will be a piece of cake.

Filter, compare, interview, and summarize; then share your findings with your colleagues. Reports are great for head honchos, but if you want to really inform and inspire, create a video, a poster series, or even a comic to communicate what you’ve learned. Want to get really fancy? Store your survey results in a centrally accessible location so anyone in your company can research and discover the insights they need to make more informed designs. Good design researchers discover valuable insights. Great design researchers turn those insights into stories.

Conclusion

As we enter the new year, it’s a great time to reflect on the work we’ve done in the past and how we can do better in the future. Without a doubt, designers working with a foundation of insights about customers can make more effective UIs. But designers aren’t the only ones who stand to gain from the data collected in an annual survey—anyone who makes things for or communicates with customers will find themselves empowered to do better work when they know more about the people they serve. The data you collect with your survey is a fantastic holiday gift to your colleagues, one that they’ll appreciate throughout the year.

Aarron Walter · 2013-12-13 · https://24ways.org/2013/data-driven-design-with-an-annual-survey/ · design
Web Content Accessibility Guidelines—for People Who Haven’t Read Them

I’ve been a huge fan of the Web Content Accessibility Guidelines 2.0 since the World Wide Web Consortium (W3C) published them, nine years ago. I’ve found them practical and future-proof, and I’ve found that they can save a huge amount of time for designers and developers. You can apply them to anything that you can open in a browser. My favourite part is when I use the guidelines to make a website accessible, and then attend user-testing and see someone with a disability easily using that website.

Today, the United Nations International Day of Persons with Disabilities, seems like a good time to re-read Laura Kalbag’s explanation of why we should bother with accessibility. That should motivate you to devour this article.

If you haven’t read the Web Content Accessibility Guidelines 2.0, you might find them a bit off-putting at first. The editors needed to create a single standard that countries around the world could refer to in legislation, and so some of the language in the guidelines reads like legalese. The editors also needed to future-proof the guidelines, and so some terminology—such as “time-based media” and “programmatically determined”—can sound ambiguous. The guidelines can seem lengthy, too: printing the guidelines, the Understanding WCAG 2.0 document, and the Techniques for WCAG 2.0 document would take 1,200 printed pages.

This festive season, let’s rip off that legalese and ambiguous terminology like wrapping paper, and see—in a single article—what gifts the Web Content Accessibility Guidelines 2.0 editors have bestowed upon us.

Can your users perceive the information on your website?

The first guideline has criteria that help you prevent your users from asking “What the **** is this thing here supposed to be?”

1.1.1 Text is the most accessible format for information. Screen readers—such as the “VoiceOver” setting on your iPhone or the “TalkBack” app on your Android phone—understand text better than any other format. The same applies for other assistive technology, such as translation apps and Braille displays. So, if you have anything on your webpage that’s not text, you must add some text that gives your user the same information. You probably know how to do this already; for example: for images in webpages, put some alternative text in an alt attribute to tell your user what the image conveys; for photos in tweets, add a description to make the images accessible; for Instagram posts, write a caption that conveys the photo’s information.

The alternative text should allow the user to get the same information as someone who can see the image. For websites that have too many images for someone to add alternative text to, consider how machine learning and Dynamically Generated Alt Text might—might—be appropriate.

You can probably think of a few exceptions where providing text to describe an image might not make sense. Remember I described these guidelines as “practical”? They cover all those exceptions: User interface controls such as buttons and text inputs must have names or labels to tell your user what they do. If your webpage has video or audio (more about these later on!), you must—at least—have text to tell the user what they are. Maybe your webpage has a test where your user has to answer a question about an image or some audio, and alternative text would give away the answer. In that case, just describe the test in text so your users know what it is.
If your webpage features a work of art, tell your user the experience it evokes. If you have to include a Captcha on your webpage—and please avoid Captchas if at all possible, because some users cannot get past them—you must include text to tell your user what it is, and make sure that it doesn’t rely on only one sense, such as vision. If you’ve included something just as decoration, you must make sure that your user’s assistive technology can ignore it. Again, you probably know how to do this. For example, you could use CSS instead of HTML to include decorative images, or you could add an empty alt attribute to the img element. (Please avoid that recent trend where developers add empty alt attributes to all images in a webpage just to make the HTML validate. You’re better than that.)

(Notice that the guidelines allow you to choose how to conform to them, with whatever technology you choose. To make your website conform to a guideline, you can either choose one of the techniques for WCAG 2.0 for that guideline or come up with your own. Choosing a tried-and-tested technique usually saves time!)

1.2.1 If your website includes a podcast episode, speech, lecture, or any other recorded audio without video, you must include a transcription or some other text to give your user the same information. In a lot of cases, you might find this easier than you expect: professional transcription services can prove relatively inexpensive and fast, and sometimes a speaker or lecturer can provide the speech or lecture notes that they read out word-for-word. Just make sure that all your users can get the same information and the same results, whether they can hear the audio or not. For example, David Smith and Marco Arment always publish episode transcripts for their Under the Radar podcast.

Similarly, if your website includes recorded video without audio—such as an animation or a promotional video—you must either use text to detail what happens in the video or include an audio version. Again, this might work out easier than you perhaps fear: for example, you could check to see whether the animation started life as a list of instructions, or whether the promotional video conveys the same information as the “About Us” webpage. You want to make sure that all your users can get the same information and the same results, whether they can see that video or not.

1.2.2 If your website includes recorded videos with audio, you must add captions to those videos for users who can’t hear the audio. Professional transcription services can provide you with time-stamped text in caption formats that YouTube supports, such as .srt and .sbv. You can upload those to YouTube, so captions appear on your videos there. YouTube can auto-generate captions, but the quality varies from impressively accurate to comically inaccurate. If you have a text version of what the people in the video said—such as the speech that a politician read or the bedtime story that an actor read—you can create a transcript file in .txt format, without timestamps. YouTube then creates captions for your video by synchronising that text to the audio in the video. If you host your own videos, you can ask a professional transcription service to give you .vtt files that you can add to a video element’s track element—or you can handcraft your own.
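If you handcraft your own, a minimal sketch looks like this (the file names here are made up for illustration). The track element points the browser at the captions file:

<video controls>
  <source src="interview.mp4" type="video/mp4">
  <track kind="captions" src="interview.en.vtt" srclang="en" label="English" default>
</video>

The .vtt file itself is plain text in WebVTT format: a WEBVTT header, then a time-stamped cue for each line of dialogue:

WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome back to the show.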
(A quick aside: if your website has more videos than you can caption in a reasonable amount of time, prioritise the most popular videos, the most important videos, and the videos most relevant to people with disabilities. Then make sure your users know how to ask you to caption other videos as they encounter them.)

1.2.3 If your website has recorded videos that have audio, you must add an “audio description” narration to the video to describe important visual details, or add text to the webpage to detail what happens in the video for users who cannot see the videos. (I like to add audio files from videos to my Huffduffer account so that I can listen to them while commuting.) Maybe your home page has a video where someone says, “I’d like to explain our new TPS reports” while “Bill Lumbergh, division Vice President of Initech” appears on the bottom of the screen. In that case, you should add an audio description to the video that announces “Bill Lumbergh, division Vice President of Initech”, just before Bill starts speaking. As always, you can make life easier for yourself by considering all of your users, before the event: in this example, you could ask the speaker to begin by saying, “I’m Bill Lumbergh, division Vice President of Initech, and I’d like to explain our new TPS reports”—so you won’t need to spend time adding an audio description afterwards.

1.2.4 If your website has live videos that have some audio, you should get a stenographer to provide real-time captions that you can include with the video. I’ll be honest: this can prove tricky nowadays. The Web Content Accessibility Guidelines 2.0 predate YouTube Live, Instagram Live Stories, Periscope, and other such services. If your organisation creates a lot of live videos, you might not have enough resources to provide real-time captions for each one. In that case, if you know the contents of the audio beforehand, publish the contents during the live video—or failing that, publish a transcription as soon as possible.

1.2.5 Remember what I said about the recorded videos that have audio? If you can choose to either add an audio description or add text to the webpage to detail what happens in the video, you should go with the audio description.

1.2.6 If your website has recorded videos that include audio information, you could provide a sign language version of the audio information; some people understand sign language better than written language. (You don’t need to caption a video of a sign language version of audio information.)

1.2.7 If your website has recorded videos that have audio, and you need to add an audio description, but the audio doesn’t have enough pauses for you to add an “audio description” narration, you could provide a separate version of that video where you have added pauses to fit the audio description into.

1.2.8 Let’s go back to the recorded videos that have audio once more! You could add text to the webpage to detail what happens in the video, so that people who can neither read captions nor hear dialogue and audio description can use braille displays to understand your video.

1.2.9 If your website has live audio, you could get a stenographer to provide real-time captions. Again, if you know the contents of the audio beforehand, publish the contents during the live audio or publish a transcription as soon as possible.

(Congratulations on making it this far! I know that seems like a lot to remember, but keep in mind that we’ve covered a complex area: helping your users to understand multimedia information that they can’t see and/or hear. Grab a mince pie to celebrate, and let’s keep going.)

1.3.1 You must mark up your website’s content so that your user’s browser, and any assistive technology they use, can understand the hierarchy of the information and how each piece of information relates to the rest. Once again, you probably know how to do this: use the most appropriate HTML element for each piece of information. Mark up headings, lists, buttons, radio buttons, checkboxes, and links with the most appropriate HTML element. If you’re looking for something to do to keep you busy this Christmas, scroll through the list of the elements of HTML. Do you notice any elements that you didn’t know, or that you’ve never used? Do you notice any elements that you could use on your current projects, to mark up the content more accurately? Also, revise HTML table advanced features and accessibility, how to structure an HTML form, and how to use the native form widgets—you might be surprised at how much you can do with just HTML! Once you’ve mastered those, you can make your website much more usable for all of your users.

1.3.2 If your webpage includes information that your user has to read in a certain order, you must make sure that their browser and assistive technology can present the information in that order. Don’t rely on CSS or whitespace to create that order visually. Check that the order of the information makes sense when CSS and whitespace aren’t formatting it. Also, try using the Tab key to move the focus through the links and form widgets on your webpage. Does the focus go where you expect it to? Keep this in mind when using order in CSS Grid or Flexbox.

1.3.3 You must not presume that your users can identify sensory characteristics of things on your webpage. Some users can’t tell what you’ve positioned where on the screen. For example, instead of asking your users to “Choose one of the options on the left”, you could ask them to “Choose one of our new products” and link to that section of the webpage.

1.4.1 You must not rely on colour as the only way to convey something to your users. Some of your users can’t see, and some of your users can’t distinguish between colours. For example, if your webpage uses green to highlight the products that your shop has in stock, you could add some text to identify those products, or you could group them under a sub-heading.

1.4.2 If your webpage automatically plays a sound for more than 3 seconds, you must make sure your users can stop the sound or change its volume. Don’t rely on your user turning down the volume on their computer; some users need to hear the screen reader on their computer, and some users just want to keep listening to whatever they were listening to before your webpage interrupted them!

1.4.3 You should make sure that your text contrasts enough with its background, so that your users can read it. Bookmark Lea Verou’s Contrast Ratio calculator now. You can enter the text colour and background colour as named colours, or as RGB, RGBa, HSL, or HSLa values. You should make sure that: normal text that is 24px or larger has a ratio of at least 3:1; bold text that is 18.75px or larger has a ratio of at least 3:1; all other text has a ratio of at least 4½:1. You don’t have to do this for disabled form controls, decorative stuff, or logos—but you could!
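If you’d rather script that check than paste colours into a calculator, the ratio comes from the WCAG definition of relative luminance. Here’s a rough JavaScript sketch (the function names are mine; colours are plain [red, green, blue] arrays with channels from 0 to 255):

// Convert one sRGB channel (0–255) to a linear value, per the WCAG formula.
function linearise(channel) {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance: 0 for black, 1 for white.
function relativeLuminance([r, g, b]) {
  return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b);
}

// Contrast ratio between two colours, from 1 (identical) to 21 (black on white).
function contrastRatio(colourA, colourB) {
  const lighter = Math.max(relativeLuminance(colourA), relativeLuminance(colourB));
  const darker = Math.min(relativeLuminance(colourA), relativeLuminance(colourB));
  return (lighter + 0.05) / (darker + 0.05);
}

contrastRatio([255, 255, 255], [0, 0, 0]); // 21
contrastRatio([118, 118, 118], [255, 255, 255]); // about 4.54, so #767676 on white just passes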
1.4.4 You should make sure your users can resize the text on your website up to 200% without using their assistive technology—and still access all your content and functionality. You don’t have to do this for subtitles or images of text.

1.4.5 You should avoid using images of text and just use text instead. In 1998, Jeffrey Veen’s first Hot Design Tip said, “Text is text. Graphics are graphics. Don’t confuse them.” Now that you can apply powerful CSS text-styling properties, use CSS Grid to precisely position text, and choose from thousands of web fonts (Jeffrey co-founded Typekit to help with this), you pretty much never need to use images of text. The guidelines say you can use images of text if you let your users specify the font, size, colour, and background of the text in the image of text—but I’ve never seen that on a real website. Also, this doesn’t apply to logos.

1.4.6 Let’s go back to colour contrast for a second. You could make your text contrast even more with its background, so that even more of your users can read it. To do that, use Lea Verou’s Contrast Ratio calculator to make sure that: normal text that is 24px or larger has a ratio of at least 4½:1; bold text that is 18.75px or larger has a ratio of at least 4½:1; all other text has a ratio of at least 7:1.

1.4.7 If your website has recorded speech, you could make sure there are no background sounds, or that your users can turn off any background sounds. If that’s not possible, you could make sure that any background sounds that last longer than a couple of seconds are at least four times quieter than the speech. This doesn’t apply to audio Captchas, audio logos, singing, or rapping. (Yes, these guidelines mention rapping!)

1.4.8 You could make sure that your users can reformat blocks of text on your website so they can read them better. To do this, make sure that your users can: specify the colours of the text and the background, and make the blocks of text less than 80 characters wide, and align text to the left (or right for right-to-left languages), and set the line height to 150%, and set the vertical distance between paragraphs to 1½ times the line height of the text, and resize the text (without using their assistive technology) up to 200% and still not have to scroll horizontally to read it. By the way, when you specify a colour for text, always specify a colour for its background too. Don’t rely on default background colours!

1.4.9 Let’s return to images of text for a second. You could make sure that you use them only for decoration and logos.

Can users operate the controls and links on your website?

The second guideline has criteria that help you prevent your users from asking, “How the **** does this thing work?”

2.1.1 You must make sure that your users can carry out all of your website’s activities with just their keyboard, without time limits for pressing keys. (This doesn’t apply to drawing or anything else that requires a pointing device such as a mouse.) Again, if you use the most appropriate HTML element for each piece of information and for each form element, this should prove easy.

2.1.2 You must make sure that when the user uses the keyboard to focus on some part of your website, they can then move the focus to some other part of your webpage without needing to use a mouse or touch the screen. If your website needs them to do something complex before they can move the focus elsewhere, explain that to your user.
These “keyboard traps” have become rare, but beware of forms that move focus from one text box to another as soon as they receive the correct number of characters.

2.1.3 Let’s revisit making sure that your users can carry out all of your website’s activities with just their keyboard, without time limits for pressing keys. You could make sure that your user can do absolutely everything on your website with just the keyboard.

2.2.1 Sometimes people need more time than you might expect to complete a task on your website. If any part of your website imposes a time limit on a task, you must do at least one of these: let your users turn off the time limit before they encounter it; or let your users increase the time limit to at least 10 times the default time limit before they encounter it; or warn your users before the time limit expires and give them at least 20 seconds to extend it, and let them extend it at least 10 times. Remember: these guidelines are practical. They allow you to enforce time limits for real-time events such as auctions and ticket sales, where increasing or extending time limits wouldn’t make sense. Also, the guidelines allow you to enforce a maximum time limit of 20 hours. The editors chose 20 hours because people need to go to sleep at some stage. See? Practical!

2.2.2 In my experience, this criterion remains the least well-known—even though some users can only use websites that conform to it. If your website presents content alongside other content that can distract users by automatically moving, blinking, scrolling, or updating, you must make sure that your users can: pause, stop, or hide the other content if it’s not essential and lasts more than 5 seconds; and pause, stop, hide, or control the frequency of the other content if it automatically updates. It’s OK if your users miss live information such as stock price updates or football scores; you can’t do anything about that! Also, this doesn’t apply to animations such as progress bars that you put on a website to let all users know that the webpage isn’t frozen. (If this one sounds complex, just add a pause button to anything that might distract your users.)

2.2.3 Let’s go back to time limits on tasks on your website. You could make your website even easier to use by removing all time limits except those on real-time events such as auctions and ticket sales. That would mean your user wouldn’t need to interact with a timer at all.

2.2.4 You could let your users turn off all interruptions—server updates, promotions, and so on—apart from any emergency information.

2.2.5 This is possibly my favourite of these criteria! After your website logs your user out, you could make sure that when they log in again, they can continue from where they were without having lost any information. Do that, and you’ll be on everyone’s Nice List this Christmas.

2.3.1 You must make sure that nothing flashes more than three times a second on your website, unless you can make sure that the flashes remain below the acceptable general flash and red flash thresholds…

2.3.2 …or you could just make sure that nothing flashes more than three times per second on your website. This is usually an easier goal.

2.4.1 You must make sure that your users can jump past any blocks of content, such as navigation menus, that are repeated throughout your website.
You know the drill here: using HTML’s sectioning elements such as header, nav, main, aside, and footer allows users with assistive technology to go straight to the content they need, and adding “Skip Navigation” links allows everyone to get to your main content faster.

2.4.2 You must add a proper title to describe each webpage’s topic. Your webpage won’t even validate without a title element, so make it a useful one.

2.4.3 If your users can focus on links and native form widgets, you must make sure that they can focus on elements in an order that makes sense.

2.4.4 You must make sure that your users can understand the purpose of a link when they read: the text of the link; or the text of the paragraph, list item, table cell, or table header for the cell that contains the link; or the heading above the link. You don’t have to do that for games and quizzes.

2.4.5 You should give your users multiple ways to find any webpage within a set of webpages. Add site-wide search and a site map and you’re done! This doesn’t apply for a webpage that is part of a series of actions (like a shopping cart and checkout flow) or to a webpage that is a result of a series of actions (like a webpage confirming that the user has bought what was in the shopping cart).

2.4.6 You should help your users to understand your content by providing: headings that describe the topics of your content; labels that describe the purpose of the native form widgets on the webpage.

2.4.7 You should make sure that users can see which element they have focussed on. Next time you use your website, try hitting the Tab key repeatedly. Does it visually highlight each item as it moves focus to it? If it doesn’t, search your CSS to see whether you’ve applied outline: 0; to all elements—that’s usually the culprit. Use the :focus pseudo-class to define how elements should appear when they have focus.

2.4.8 You could help your user to understand where the current webpage is located within your website. Add “breadcrumb navigation” and/or a site map and you’re done.

2.4.9 You could make links even easier to understand, by making sure that your users can understand the purpose of a link when they read the text of the link. Again, you don’t have to do that for games and quizzes.

2.4.10 You could use headings to organise your content by topic.

Can users understand your content?

The third guideline has criteria that help you prevent your users from asking, “What the **** does this mean?”

3.1.1 Let’s start this section with the criterion that possibly takes the least time to implement: you must make sure that the user’s browser can identify the main language that your webpage’s content is written in. For a webpage that has mainly English content, use <html lang="en">.

3.1.2 You must specify when content in another language appears in your webpage, like so: <q>I wish you a <span lang="fr">Joyeux Noël</span>.</q>. You don’t have to do this for proper names, technical terms, or words that you can’t identify a language for. You also don’t have to do it for words from a different language that people consider part of the language around those words; for example, <q>Come to our Christmas rendezvous!</q> is OK.

3.1.3 You could make sure that your users can find out the meaning of any unusual words or phrases, including idioms like “stocking filler” or “Bah! Humbug!” and jargon such as “VoiceOver” and “TalkBack”. Provide a glossary or link to a dictionary.

3.1.4 You could make sure that your users can find out the meaning of any abbreviation.
For example, VoiceOver pronounces “Xmas” as “Smas” instead of “Christmas”. Using the abbr element and linking to a glossary can help. (Interestingly, VoiceOver pronounces “abbr” as “abbreviation”!) 3.1.5 Do your users need to be able to read better than a typically educated nine-year-old, to read your content (apart from proper names and titles)? If so, you could provide a version that doesn’t require that level of reading ability, or you could provide images, videos, or audio to explain your content. (You don’t have to add captions or audio description to those videos.) 3.1.6 You could make sure that your users can access the pronunciation of any word in your content if that word’s meaning depends on its pronunciation. For example, the word “close” could have one of two meanings, depending on pronunciation, in a phrase such as, “Ready for Christmas? Close now!” 3.2.1 Some users need to focus on elements to access information about them. You must make sure that focusing on an element doesn’t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form. 3.2.2 Webpages are easier for users when the controls do what they’re supposed to do. Unless you have warned your users about it, you must make sure that changing the value of a control such as a text box, checkbox, or drop-down list doesn’t trigger any major changes, such as opening a new window, focusing on another element, or submitting a form. 3.2.3 To help your users to find the content they want on each webpage, you should put your navigation elements in the same place on each webpage. (This doesn’t apply when your user has changed their preferences or when they use assistive technology to change how your content appears.) 3.2.4 When a set of webpages includes things that have the same functionality on different webpages, you should name those things consistently. For example, don’t use the word “Search” for the search box on one webpage and “Find” for the search box on another webpage within that set of webpages. 3.2.5 Let’s go back to major changes, such as a new window opening, another element taking focus, or a form being submitted. You could make sure that they only happen when users deliberately make them happen, or when you have warned users about them first. For example, you could give the user a button for updating some content instead of automatically updating that content. Also, if a link will open in a new window, you could add the words “opens in new window” to the link text. 3.3.1 Users make mistakes when filling in forms. Your website must identify each mistake to your user, and must describe the mistake to your users in text so that the user can fix it. One way to identify mistakes reliably to your users is to set the aria-invalid attribute to true in the element that has a mistake. That makes sure that users with assistive technology will be alerted about the mistake. Of course, you can then use the [aria-invalid="true"] attribute selector in your CSS to visually highlight any such mistakes. Also, look into how certain attributes of the input element such as required, type, and list can help prevent and highlight mistakes. 3.3.2 You must include labels or instructions (and possibly examples) in your website’s forms, to help your users to avoid making mistakes. 3.3.3 When your user makes a mistake when filling in a form, your webpage should suggest ways to fix that mistake, if possible. 
This doesn’t apply in scenarios where those suggestions could affect the security of the content.

3.3.4 Whenever your user submits information that: has legal or financial consequences; or affects information that they have previously saved in your website; or is part of a test …you should make sure that they can: undo it; or correct any mistakes, after your webpage checks their information; or review, confirm, and correct the information before they finally submit it.

3.3.5 You could help prevent your users from making mistakes by providing obvious, specific help, such as examples, animations, spell-checking, or extra instructions.

3.3.6 Whenever your user submits any information, you could make sure that they can: undo it; or correct any mistakes, after your webpage checks their information; or review, confirm, and correct the information before they finally submit it.

Have you made your website robust enough to work on your users’ browsers and assistive technologies?

The fourth and final guideline has criteria that help you prevent your users from asking, “Why the **** doesn’t this work on my device?”

4.1.1 You must make sure that your website works as well as possible with current and future browsers and assistive technology. Prioritise complying with web standards instead of relying on the capabilities of currently popular devices and browsers. Web developers didn’t expect their users to be unwrapping the Wii U Browser five years ago—who knows what browsers and assistive technologies our users will be unwrapping in five years’ time? Avoid hacks, and use the W3C Markup Validation Service to make sure that your HTML has no errors.

4.1.2 If you develop your own user interface components, you must make their name, role, state, properties, and values available to your user’s browsers and assistive technologies. That should make them almost as accessible as standard HTML elements such as links, buttons, and checkboxes. (I’ve sketched an example of this below.)

“…and a partridge in a pear tree!”

…as that very long Christmas song goes. We’ve covered a lot in this article—because your users have a lot of different levels of ability. Hopefully this has demystified the Web Content Accessibility Guidelines 2.0 for you. Hopefully you spotted a few situations that could arise for users on your website, and you now know how to tackle them. To start applying what we’ve covered, you might like to look at Sarah Horton and Whitney Quesenbery’s personas for Accessible UX. Discuss the personas, get into their heads, and think about which aspects of your website might cause problems for them. See if you can apply what we’ve covered today, to help users like them to do what they need to do on your website.

How to know when your website is perfectly accessible for everyone

LOL! There will never be a time when your website becomes perfectly accessible for everyone. Don’t aim for that. Instead, aim for regularly testing and making your website more accessible.

Web Content Accessibility Guidelines (WCAG) 2.1

The W3C hope to release the Web Content Accessibility Guidelines (WCAG) 2.1 as a “recommendation” (that’s what the W3C call something that we should start using) by the middle of next year. Ten years may seem like a long time to move from version 2.0 to version 2.1, but consider the scale of the task: the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible. Keep an eye out for 2.1!
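As promised, here’s what criterion 4.1.2 can look like in practice. This is only a sketch (a native button or checkbox gives you the name, role, and state for free, and is almost always the better choice), but it shows how a custom component can expose that information:

<div role="switch" tabindex="0" aria-checked="false" id="music-switch">
  Play background music
</div>
<script>
  // The element's text gives it a name, role="switch" gives it a role, and
  // aria-checked carries its state; tabindex="0" puts it in the keyboard order.
  const musicSwitch = document.getElementById('music-switch');

  function toggleSwitch() {
    const isOn = musicSwitch.getAttribute('aria-checked') === 'true';
    musicSwitch.setAttribute('aria-checked', String(!isOn));
  }

  musicSwitch.addEventListener('click', toggleSwitch);
  musicSwitch.addEventListener('keydown', (event) => {
    // Support the keys that a native control would support.
    if (event.key === ' ' || event.key === 'Enter') {
      event.preventDefault();
      toggleSwitch();
    }
  });
</script>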
You’ll go down in history

One last point: I’ve met a surprising number of web designers and developers who do great work to make their websites more accessible without ever telling their users about it. Some of your potential customers have possibly tried and failed to use your website in the past. They probably won’t try again unless you let them know that things have improved. A quick Twitter search for your website’s name alongside phrases like “assistive technology”, “doesn’t work”, or “#fail” can let you find frustrated users—so you can tell them about how you’re making your website more accessible. Start making your websites work better for everyone—and please, let everyone know.

Alan Dalton · 2017-12-03 · https://24ways.org/2017/wcag-for-people-who-havent-read-them/ · code
Web Content Accessibility Guidelines 2.1—for People Who Haven’t Read the Update

Happy United Nations International Day of Persons with Disabilities 2018! The United Nations chose “Empowering persons with disabilities and ensuring inclusiveness and equality” as this year’s theme. We’ve seen great examples of that in 2018; for example, Paul Robert Lloyd has detailed how he improved the accessibility of this very website. On social media, US Congressmember-Elect Alexandria Ocasio-Cortez started using the Clipomatic app to add live captions to her Instagram live stories, conforming to success criterion 1.2.4, “Captions (Live)” of the Web Content Accessibility Guidelines (figure 1)…and British Vogue Contributing Editor Sinéad Burke has used the split-screen feature of Instagram live stories to invite an interpreter to provide live Sign Language interpretation, going above and beyond success criterion 1.2.6, “Sign Language (Prerecorded)” of the Web Content Accessibility Guidelines (figure 2).

Figure 1: Screenshot of Alexandria Ocasio-Cortez’s Instagram story with live captions
Figure 2: Screenshot of Sinéad Burke’s Instagram story with Sign Language Interpretation

That theme chimes with this year’s publication of the World Wide Web Consortium (W3C)’s Web Content Accessibility Guidelines (WCAG) 2.1. In last year’s “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I mentioned the scale of the project to produce this update during 2018: “the editors have to update the guidelines to cover all the new ways that people interact with new technologies, while keeping the guidelines backwards-compatible”. The WCAG working group have added 17 success criteria to the 61 that they released way back in 2008—for context, that was 1½ years before Apple released their first iPad! These new criteria make it easier than ever for us web geeks to produce work that is more accessible to people using mobile devices and touchscreens, people with low vision, and people with cognitive and learning disabilities. Once again, let’s rip off all the legalese and ambiguous terminology like wrapping paper, and get up to date.

Can your users perceive the information on your website?

The first guideline has criteria that help you prevent your users from asking, “What the **** is this thing here supposed to be?” We’ve seven new criteria for this guideline.

1.3.4 Some people can’t easily change the orientation of the device that they use to browse the web, and so you should make sure that your users can use your website in portrait orientation and in landscape orientation. Consider how people slowly twirl presents that they have plucked from under the Christmas tree, to find the appropriate orientation—and expect your users to do likewise with your websites and apps. We’ve had 18½ years since John Allsopp’s revelatory Dao of Web Design enlightened us to “embrace the fact that the web doesn’t have the same constraints” as printed pages, and to “design for this flexibility”. So, even though this guideline doesn’t apply to websites where “a specific display orientation is essential,” such as a piano tutorial, always ask yourself, “What would John Allsopp do?”

1.3.5 You should help the user’s browser to automatically complete—or not complete—form fields, to save the user some time and effort. The surprisingly powerful and flexible autocomplete attribute for input elements should prove most useful here.
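For example, a checkout form might use standard autocomplete tokens like these (the field names are made up for illustration; the token values are from the HTML specification):

<label for="full-name">Name</label>
<input id="full-name" name="full-name" autocomplete="name">

<label for="email">Email</label>
<input id="email" name="email" type="email" autocomplete="email">

<label for="postcode">Postcode</label>
<input id="postcode" name="postcode" autocomplete="postal-code">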
If you’ve used microformats or microdata to mark up information about a person, the autocomplete attribute’s range of values should seem familiar. I like how the W3C’s “Using HTML 5.2 autocomplete attributes” says that autocompleted values in forms help “those with dexterity disabilities who have trouble typing, those who may need more time, and anyone who wishes to reduce effort to fill out a form” (emphasis mine). Um…🙋‍♂️

1.3.6 I like this one a lot, because it can help a huge audience to overcome difficulties that might prevent them from ever using the web. Some people have cognitive difficulties that affect their memory, focus, attention, language processing, and/or decision-making. Those users often rely on assistive technologies that present information through proprietary symbols, summaries of content, and keyboard shortcuts. You could use ARIA landmarks to identify the regions of each webpage. You could also keep an eye on the W3C’s ongoing work on Personalisation Semantics.

1.4.10 If you were to find a Nintendo Switch and “Super Mario Odyssey” under your Christmas tree, you would have many hours of enjoyably scrolling horizontally and vertically to play the game. On the other hand, if you had to zoom a webpage to 400% so that you could read the content, you might have many hours of frustratedly scrolling horizontally and vertically to read the content. Learned reader, I assume you understand the purpose and the core techniques of Responsive Web Design. I also assume you’re getting up to speed with the new Grid, Flexbox, and Box Alignment techniques for layout, and overflow-wrap. Using those skills, you should make sure that all content and functionality remain available when the browser is 320px wide, without your user needing to scroll horizontally. (For vertical text, you should make sure that all content and functionality remain available when the browser is 256px high, without your user needing to scroll vertically.) You don’t have to do this for anything that would lose meaning if you restructured it into one narrow column. That includes some images, maps, diagrams, video, games, presentations, and data tables. Remember to check how your media queries affect font size: your user might find that text becomes smaller as they zoom into the webpage. So, test this one on real devices, or—better yet—test it with real users.

1.4.11 In “Web Content Accessibility Guidelines—for People Who Haven’t Read Them”, I recommended bookmarking Lea Verou’s Contrast Ratio calculator for checking that text contrasts enough with its background (for success criteria 1.4.3 and 1.4.6), so that more people can read it more easily. For this update, you should make sure that form elements and their focus states have a 3:1 contrast ratio with the colour around them. This doesn’t apply to controls that use the browser’s default styling. Also, you should make sure that graphics that convey information have a 3:1 contrast ratio with the colour around them.

1.4.12 Some people, due to low vision or dyslexia, might need to modify the typography that you agonised over. Research indicates that you should make sure that all content and functionality would remain available if a user were to set: line height to at least 1½ × the font size; space below paragraphs to at least 2 × the font size; letter spacing to at least 0.12 × the font size; word spacing to at least 0.16 × the font size. To test this, check for text overlapping, text hiding behind other elements, or text disappearing.
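One quick way to run that test (a rough sketch, applied through your browser’s developer tools or a user stylesheet) is to force those values on to everything, then look for overlapping, hidden, or disappearing text:

/* WCAG 2.1 success criterion 1.4.12 text spacing values, as a stress test */
* {
  line-height: 1.5 !important;
  letter-spacing: 0.12em !important;
  word-spacing: 0.16em !important;
}

p {
  margin-bottom: 2em !important;
}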
1.4.13 Sometimes when visiting a website, you hover over—or tab on to—something that unleashes a newsletter subscription pop-up, some suggested “related content”, and/or a GDPR-related pop-up. On a well-designed website, you can press the Esc key on your keyboard or click a prominent “Close” button or “X” button to vanquish such intrusions. If the Esc key fails you, or if you either can’t see or can’t click the “Close” button…well, you’ll probably just close that browser tab. This situation can prove even more infuriating for users with low vision or cognitive disabilities. So, if new content appears when your user hovers over or tabs on to some element, you should make sure that: your user can dismiss that content without needing to move their pointer or tab on to some other element (this doesn’t apply to error warnings, or well-behaved content that doesn’t obscure or replace other content); the new content remains visible while your user moves their cursor over it; the new content remains visible as long as the user hovers over that element or dismisses that content—or until the new content is no longer valid. This doesn’t apply to situations such as hovering over an element’s title attribute, where the user’s browser controls the display of the content that appears.

Can users operate the controls and links on your website?

The second guideline has criteria that help you prevent your users from asking, “How the **** does this thing work?” We’ve nine new criteria for this guideline.

2.1.4 Some websites offer keyboard shortcuts for users. For example, the keyboard shortcuts for Gmail allow the user to press the ⇧ key and u to mark a message as unread. Usually, shortcuts on websites include modifier keys, such as Ctrl, along with a letter, number, or punctuation symbol. Unfortunately, users who have dexterity challenges sometimes trigger those shortcuts by accident, and that can make a website impossible to use. Also, speech input technology can sometimes trigger those shortcuts. If your website offers single-character keyboard shortcuts, you must allow your user to turn off or remap those shortcuts. This doesn’t apply to single-character keyboard shortcuts that only work when a control, such as a drop-down list, has focus.

2.2.6 If your website uses a timeout for some process, you could store the user’s data for at least 20 hours, so that users with cognitive disabilities can take a break or take longer than usual to complete the process without losing their place or losing their data. Alternatively, you could warn the user, at the start of the process, that the website will time out after whatever amount of time you have chosen.

2.3.3 If your website has some non-essential animation (such as parallax scrolling) that starts when the user does some particular action, you could allow the user to turn off that animation so that you avoid harming users with vestibular disorders. The prefers-reduced-motion media query currently has limited browser support, but you can start using it now to avoid showing animations to users who select the “Reduce Motion” setting (or equivalent) in their device’s operating system:

@media (prefers-reduced-motion: reduce) {
  .MrFancyPants {
    animation: none;
  }
}

2.5.1 Some websites let users use multi-touch gestures on touchscreen devices. For example, Google Maps allows users to pinch with two fingers to zoom out and “unpinch” with two fingers to zoom in.
Also, some websites allow users to drag a finger to do some action, such as changing the value on an input element with type="range", or swiping sideways to the next photograph in a gallery. Some users with dexterity challenges, and some users who use a head pointer, an eye-gaze system, or speech-controlled mouse emulation, might find multi-touch gestures or dragging impossible. You must make sure that your website supports single-tap alternatives to any multi-touch gestures or dragging actions that it provides. For example, if your website lets someone pinch and unpinch a map to zoom in and out, you must also provide buttons that a user can tap to zoom in and out. 2.5.2 This might be my favourite accessibility criterion ever! Did you ever touch or press a “Send” button but then immediately realise that you really didn’t want to send the message, and so move your finger or cursor away from the “Send” button before lifting your finger?! Imagine how many arguments that functionality has prevented. 😌 You must make sure that touching or pressing does not cause anything to happen before the user raises their finger or cursor, or make sure that the user can move their finger or cursor away to prevent the action. In JavaScript, prefer onclick to onmousedown, unless your website has actions that need onmousedown. Also, this doesn’t apply to actions that need to happen as soon as the user clicks or touches. For example, a user playing a “Whac-A-Mole” game or a piano emulator needs the action to happen as soon as they click or touch the screen. 2.5.3 Recently, entrepreneur and social media guru Gary Vaynerchuk has emphasised the rise of audio and voice as output and input. He quotes a Google statistic that says one in five search queries use voice input. Once again, users with disabilities have been ahead of the curve here, having used screen readers and/or dictation software for many years. You must make sure that the text that appears on a form control or image matches how your HTML identifies that form control or image. Use proper semantic HTML to achieve this: use the label element to pair text with the corresponding input element; use an alt attribute value that exactly matches any text that appears in an image; use an aria-labelledby attribute value that exactly matches the text that appears in any complex component. 2.5.4 Modern Web APIs allow web developers to specify how their website will react to the user shaking, tilting, or gesturing towards their device. Some users might find those actions difficult, impossible, or embarrassing to perform. If you make any functionality available when the user shakes, tilts, or gestures towards their device, you must provide form controls that make that same functionality available. As usual, this doesn’t apply to websites that require shaking, tilting, or gesturing; this includes some games and music programmes. John Gruber describes the iPhone’s “Shake to Undo” gesture as “dreadful — impossible to discover through exploration of the on-screen [user interface], bad for accessibility, and risks your phone flying out of your hand”. This accessibility criterion seems to empathise with John: you must make sure that your user can prevent your website from responding to shaking, tilting and/or gesturing towards their device. 2.5.5 Homer Simpson’s telephone famously complained, “The fingers you have used to dial are too fat.” I think we’ve all felt like that when using phones and tablets, particularly when trying to dismiss pop-ups and ads. 
2.5.4 Modern Web APIs allow web developers to specify how their website will react to the user shaking, tilting, or gesturing towards their device. Some users might find those actions difficult, impossible, or embarrassing to perform. If you make any functionality available when the user shakes, tilts, or gestures towards their device, you must provide form controls that make that same functionality available. As usual, this doesn’t apply to websites that require shaking, tilting, or gesturing; this includes some games and music programmes. John Gruber describes the iPhone’s “Shake to Undo” gesture as “dreadful — impossible to discover through exploration of the on-screen [user interface], bad for accessibility, and risks your phone flying out of your hand”. This accessibility criterion seems to empathise with John: you must make sure that your user can prevent your website from responding to shaking, tilting and/or gesturing towards their device. 2.5.5 Homer Simpson’s telephone famously complained, “The fingers you have used to dial are too fat.” I think we’ve all felt like that when using phones and tablets, particularly when trying to dismiss pop-ups and ads. You could make interactive elements at least 44px wide × 44px high. Apple’s “Human Interface Guidelines” agree: “Provide ample touch targets for interactive elements. Try to maintain a minimum tappable area of 44pt x 44pt for all controls.” This doesn’t apply to links within inline text, or to unstyled elements. 2.5.6 Expect your users to use whichever input devices they want, and to change from one to another whenever they please. For example, a user with a tablet and keyboard might jab icons on the screen while typing on the keyboard, or a user might dictate text while alone and then type on a keyboard when a colleague arrives. You could make sure that your website allows your users to use whichever available input modality they choose. Once again, this doesn’t apply to websites that require a specific modality; this includes typing tutors and music programmes. Can users understand your content? The third guideline has criteria that help you prevent your users from asking, “What the **** does this mean?” We’ve no new criteria for this guideline. Have you made your website robust enough to work on your users’ browsers and assistive technologies? The fourth and final guideline has criteria that help you prevent your users from asking, “Why the **** doesn’t this work on my device?” We’ve one new criterion for this guideline. 4.1.3 Sometimes you need to let your user know the status of something: “Did it work OK? What was the error? How far through it are we?” However, you should avoid making your user lose their place on the webpage, and so you should let them know the status without opening a new window, focusing on another element, or submitting a form. To do this properly for assistive technology users, choose the appropriate ARIA role for the new content; for example: if your user needs to know, “Did it work OK?”, add role="status"; if your user needs to know, “What was the error?”, add role="alert"; if your user needs to know, “How far through it are we?”, add role="log" (for a chat window) or role="progressbar" (for, well, a progress bar).
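Putting the first of those into markup, a minimal sketch might look like this. The element sits empty in the page from the start, and you update its text when the status changes, so assistive technology announces it without the user losing their place:

<div role="status" id="save-status"></div>
<script>
  // Hypothetical example: announce a successful save without moving focus
  document.getElementById('save-status').textContent = 'Draft saved.';
</script>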
Better design for humans My favourite of Luke Wroblewski’s collection of Design Quotes is, “Design is the art of gradually applying constraints until only one solution remains,” from that most prolific author, “Unknown”. I’ve always viewed the Web Content Accessibility Guidelines as people-based constraints, and liked how they help the design process. With these 17 new web content accessibility criteria, go forth and create solutions that more people than ever before can use. Spending those book vouchers you got for Christmas What next? If you’re looking for something to do to keep you busy this Christmas, I thoroughly recommend these four books for increasing your accessibility expertise: “Pro HTML5 Accessibility” by Joshue O Connor (Head of Accessibility (Interim) at the UK Government Digital Service, Director of InterAccess, and one of the editors of the Web Content Accessibility Guidelines 2.1): Although this book is six years old—a long time in web design—I find it an excellent go-to resource. It begins by explaining how people with disabilities use the web, and then expertly explains modern HTML in that context. “A Web for Everyone—Designing Accessible User Experiences” by Sarah Horton (the Paciello Group’s UX Strategy Lead) and Whitney Quesenbery (the Center for Civic Design’s co-director): This book covers the Web Content Accessibility Guidelines 2.0, the principles of Universal Design, and design thinking. Its personas for Accessible UX and its profiles of well-known industry figures—including some 24ways authors—keep its content practical and relevant throughout. “Accessibility For Everyone” by Laura Kalbag (Ind.ie’s co-founder and designer, and 24ways author): This book is just over a year old, and so serves as a great resource for up-to-date coverage of guidelines, laws, and accessibility features of operating systems—as well as content, design, coding, and testing. The audiobook, which Laura narrates, can help you and your colleagues go from having little or no understanding of web accessibility, to becoming familiar with all aspects of web accessibility—in less than four hours. “Just Ask: Integrating Accessibility Throughout Design” by Shawn Lawton Henry (the World Wide Web Consortium (W3C)’s Web Accessibility Initiative (WAI)’s Outreach Coordinator): Although this book is 11½ years old, the way it presents accessibility as part of the User-Centered Design process is timeless. I found its section on Usability Testing with people with disabilities particularly useful. 2018 Alan Dalton alandalton 2018-12-03T00:00:00+00:00 https://24ways.org/2018/wcag-for-people-who-havent-read-the-update/ ux
275 Context First: Web Strategy in Four Handy Ws Many, many years ago, before web design became my proper job, I trained and worked as a journalist. I studied publishing in London and spent three fun years learning how to take a few little nuggets of information and turn them into a story. I learned a bunch of stuff that has all been a huge help to my design career. Flatplanning, layout, typographic theory. All of these disciplines have since translated really well to web design, but without doubt the most useful thing I learned was how to ask difficult questions. Pretty much from day one of journalism school they hammer into you the importance of the Five Ws. Five disarmingly simple lines of enquiry that eloquently manage to provide the meat of any decent story. And with alliteration thrown in too. For a young journo, it’s almost too good to be true. Who? What? Where? When? Why? It seems so obvious as to almost be trite but, fundamentally, any story that manages to answer those questions for the reader is doing a pretty good job. You’ll probably have noticed feeling underwhelmed by certain news pieces in the past – disappointed, like something was missing. Some irritating oversight that really lets the story down. No doubt it was one of the Ws – those innocuous little suckers are generally only noticeable by their absence, but they sure get missed when they’re not there. Question everything I’ve always been curious. An inveterate tinkerer with things and asker of dopey questions, often to the point of abject annoyance for anyone unfortunate enough to have ended up in my line of fire. So, naturally, the Five Ws started drifting into other areas of my life. I’d scrutinize everything, trying to justify or explain my rationale using these Ws, but I’d also find myself ripping apart the stuff that clearly couldn’t justify itself against the same criteria. So when I started working as a designer I applied the same logic and, sure enough, the Ws pretty much mapped to the exact same needs we had for gathering requirements at the start of a project. It seemed so obvious, such a simple way to establish the purpose of a product. What was it for? Why were we making it? And, of course, who were we making it for? It forced clients to stop and think, when really what they wanted was to get going and see something shiny. Sometimes that was a tricky conversation to have, but it’s no coincidence that those who got it also understood the value of strategy and went on to have good solid products, while those that didn’t often ended up with arrogantly insular and very shiny but ultimately unsatisfying and expendable products. Empty vessels make the most noise and all that… Content first I was both surprised and pleased when the whole content first idea started to rear its head a couple of years back. Pleased, because without doubt it’s absolutely the right way to work. And surprised, because personally it’s always been the way I’ve done it – I wasn’t aware there was even an alternative way. Content in some form or another is the whole reason we were making the things we were making. I can’t even imagine how you’d start figuring out what a site needs to do, how it should be structured, or how it should look without a really good idea of what that content might be. It baffles me still that this was somehow news to a lot of people. What on earth were they doing? Design without purpose is just folly, surely?
It’s great to see the idea gaining momentum but, having watched it unfold, it occurred to me recently that although it’s fantastic to see a tangible shift in thinking – away from those bleak times, where making things up was somehow deemed an appropriate way to do things – there’s now a new bad guy in town. With any buzzword solution of the moment, there’s always a catch, and it seems like some have taken the content first approach a little too literally. By which I mean, it’s literally the first thing they do. The project starts, there’s a very cursory nod towards gathering requirements, and off they go, cranking content. Writing copy, making video, commissioning illustrations. All that’s happened is that the ‘making stuff up’ part has shifted along the line, away from layout and UI, back to the content. Starting is too easy I can’t remember where I first heard that phrase, but it’s a great sentiment which applies to so much of what we do on the web. The medium is so accessible and to an extent disposable; throwing things together quickly carries far less burden than in any other industry. We’re used to tweaking as we go, changing bits, iterating things into shape. The ubiquitous beta tag has become the ultimate caveat, and has made the unfinished and unpolished acceptable. Of course, that can work brilliantly in some circumstances. Occasionally, a product offers such a paradigm shift it’s beyond the level of deep planning and prelaunch finessing we’d ideally like. But, in the main, for most client sites we work on, there really is no excuse not to do things properly. To ask the tricky questions, to challenge preconceptions and really understand the Ws behind the products we’re making before we even start. The four Ws For product definition, only four of the five Ws really apply, although there’s a lot of discussion around the idea of when being an influencing factor. For example, the context of a user’s engagement with your product is something you can make a call on depending on the specifics of the project. So, here’s my take on the four essential Ws. I’ll point out here that, of course, these are not intended to be autocratic dictums. Your needs may differ, your clients’ needs may differ, but these four starting points will get you pretty close to where you need to be. Who It’s surprising just how many projects start without a real understanding of the intended audience. Many clients think they have an idea, but without really knowing – it’s presumptive at best, and we all know what presumption is the mother of, right? Of course, we can’t know our audiences in the same way a small shop owner might know their customers. But we can at least strive to find out what type of people are likely to be using the product. I’m not talking about deep user research. That should come later. These are the absolute basics. What’s the context for their visit? How informed are they? What’s their level of comprehension? Are they able to self-identify and relate to categories you have created? I could go on, and it changes on a per-project basis. You’ll only find this out by speaking to them, if not in person, then indirectly through surveys, questionnaires or polls. The mechanism is less important than actually reaching out and engaging with them, because without that understanding it’s impossible to start to design with any empathy. What Once you become deeply involved directly with a product or service, it’s notoriously difficult to see things as an outsider would. 
You learn the thing inside and out, you develop shortcuts and internal phraseology. Colloquialisms creep in. You become too close. So it’s no surprise when clients sometimes struggle to explain what it is their product actually does in a way that others can understand. Often products are complex but, really, the core reasons behind someone wanting to use that product are very simple. There’s a value proposition for the customer and, if they choose to engage with it, there’s a value exchange. If that proposition or exchange isn’t transparent, then people become confused and will likely go elsewhere. Make sure both your client and you really understand what that proposition is and, in turn, what the expected exchange should be. In a nutshell: what is the intended outcome of that engagement? Often the best way to do this is strip everything back to nothing. Verbosity is rife on the web. Just because it’s easy to create content, that shouldn’t be a reason to do so. Figure out what the value proposition is and then reintroduce content elements that genuinely help explain or present that to a level that is appropriate for the audience. Why In advertising, they talk about the truths behind a product or service. Truths can be tangible or abstract, but the most important part is the resonance those truths hit with a customer. In a digital product or service those truths are often exposed as benefits. Why is this what I need? Why will it work for me? Why should I trust you? The why is one of the more fluffy Ws, yet it’s such an important one to nail. Clients can get prickly when you ask them to justify the why behind their product, but it’s a fantastic way to make sure the value proposition is clear, realistic and meets with the expectations of both client and customer. It’s our job as designers to question things: we’re not just a pair of hands for clients. Just recently I spoke to a potential client about a site for his business. I asked him why people would use his product and also why his product seemed so fractured in its direction. He couldn’t answer that question so, instead of ploughing on regardless, he went back to his directors and is now re-evaluating that business. It was awkward but he thanked me and hopefully he’ll have a better product as a result. Where In this instance, where is not so much a geographical thing, although in some cases that level of context may indeed become an influencing factor… The where we’re talking about here is the position of the product in relation to others around it. By looking at competitors or similar services around the one you are designing, you can start to get a sense for many of the things that are otherwise hard to pin down or have yet to be defined. For example, in a collection of sites all selling cars, where does yours fit most closely? Where are the overlaps? How are they communicating to their customers? How is the product range presented or categorized? It’s good to look around and see how others are doing it. Not in a quest for homogeneity but more to reference or to avoid certain patterns that may or may not make sense for your own particular product. Clients often strive to be different for the sake of it. They feel they need to provide distinction by going against the flow a bit. We know different. We know users love convention. They embrace familiar mental models. They’re comfortable with things that they’ve experienced elsewhere.
By showing your client that position is a vital part of their strategy, you can help shape their product into something great. To conclude So there we have it – the four Ws. Each one tells a different and vital part of the story you need to be able to make a really good product. It might sound like a lot of work, particularly when the client is breathing down your neck expecting to see things, but without those pieces in place, the story you’re building your product on, and the content that you’re creating to form that product, can only ever fit into one genre. Fiction. 2011 Alex Morris alexmorris 2011-12-10T00:00:00+00:00 https://24ways.org/2011/context-first/ content
293 A Favor for Your Future Self We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens. We comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front just in case to give us a bit of safety later. I want to talk about a situation that Past Alicia and Team couldn’t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. It wasn’t a very pleasant experience, not only for the reason why we had to remove so much of what we had built, but also because it’s the ultimate “I really hope this doesn’t break something else” situation. It was a stressful and tedious effort of triple checking that the things we were removing weren’t dependencies elsewhere. To be honest, we wouldn’t have been able to do this with any amount of success or confidence without our test suite. Writing tests for code is one of those things that developers really, really don’t want to do. It’s one of the easiest things to cut in the development process, and there’s often a struggle to have developers start writing tests in the first place. One of the best lessons the web has taught us is that we can’t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren’t in a tough spot later on because we only thought of the best case scenarios. JavaScript Regardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you’re choosing to build a JavaScript heavy app, you absolutely should be writing some combination of unit and integration tests. Unit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. Great for reused functions and small, scoped areas, this is the closest you examine your code with the testing microscope. For example, if we were to build a calculator, the most minute piece we could test could be the basic operations.

/*
 * This example uses a test framework called Jasmine
 */
describe("Calculator Operations", function () {
  it("Should add two numbers", function () {
    // Say we have a calculator
    Calculator.init();
    // We can run the function that does our addition calculation...
    var result = Calculator.addNumbers(7,3);
    // ...and ensure we're getting the right output
    expect(result).toBe(10);
  });
});

Even though these teeny bits work in isolation, we should ensure that connecting the large pieces work, as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let’s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn’t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.).
it("Should remember the last calculation", function () {
  // Jasmine's toHaveBeenCalled matcher needs a spy, so set one up
  // that still calls through to the real implementation
  spyOn(Calculator, "updateCurrentValue").and.callThrough();
  // Run an operation
  Calculator.addNumbers(7,10);
  // Expect something else to have happened as a result
  expect(Calculator.updateCurrentValue).toHaveBeenCalled();
  expect(Calculator.currentValue).toBe(17);
});

Unit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although it still might happen, you’ll be able to catch problems much sooner than you would without a test suite, and hopefully never push those failures to your production environment. Interfaces Regardless of how you’re building something, it most definitely has some kind of interface. Whether you’re using a very barebones structure, or you’re leveraging a whole design system, these things can be tested as well. Acceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. These are not necessarily for simulating edge-case scenarios, but rather ensuring that our core offerings are stable. For example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress.

/*
 * This example uses Jasmine along with an add-on called jasmine-integration
 */
describe("Acceptance tests", function () {
  // Go to our signup page
  var page = visit("/signup");
  // Fill our signup form with invalid information
  page.fill_in("input[name='email']", "Not An Email");
  page.fill_in("input[name='name']", "Alicia");
  page.click("button[type=submit]");
  // Check that we get an expected error message
  it("Shouldn't allow signup with invalid information", function () {
    expect(page.find("#signupError").hasClass("hidden")).toBeFalsy();
  });
  // Now, fill our signup form with valid information
  page.fill_in("input[name='email']", "thisismyemail@gmail.com");
  page.fill_in("input[name='name']", "Gerry");
  page.click("button[type=submit]");
  // Check that we get an expected success message and the error message is hidden
  it("Should allow signup with valid information", function () {
    expect(page.find("#signupError").hasClass("hidden")).toBeTruthy();
    expect(page.find("#thankYouMessage").hasClass("hidden")).toBeFalsy();
  });
});

In terms of visual design, we’re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga. Tools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images.
The code would look something like this:

/*
 * This example uses PhantomCSS
 */
casper.start("/home").then(function(){
  // Initial state of form
  phantomcss.screenshot("#signUpForm", "sign up form");
  // Hit the sign up button (should trigger error)
  casper.click("button#signUp");
  // Take a screenshot of the UI component
  phantomcss.screenshot("#signUpForm", "sign up form error");
  // Fill in form by name attributes & submit
  casper.fill("#signUpForm", {
    name: "Alicia Sedlock",
    email: "alicia@example.com"
  }, true);
  // Take a second screenshot of success state
  phantomcss.screenshot("#signUpForm", "sign up form success");
});

You run this code before starting any development, to create your baseline set of screen captures. After you’ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. Say you changed the margins on your form elements – the image diff would highlight exactly where that spacing had shifted. This is a great tool for ensuring not just that your site retains its expected styling, but also that nothing accidentally changes in the living style guide or modular components you may have developed. It’s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored. Conclusion The shape and size of what you’re testing for your site or app will vary. You may not need lots of unit or integration tests if you don’t write a lot of JavaScript. You may not need visual regression testing for a one page site. It’s important to assess your codebase to see which tests would provide the most benefit for you and your team. Writing tests isn’t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that’s broken breaks trust with our users, and it’s our responsibility as developers to make sure that trust isn’t broken. Testing shouldn’t be considered a “nice to have” - it should be an integral piece of our workflow and our day-to-day job. 2016 Alicia Sedlock aliciasedlock 2016-12-03T00:00:00+00:00 https://24ways.org/2016/a-favor-for-your-future-self/ code
304 Five Lessons From My First 18 Months as a Dev I recently moved from Sydney to London to start a dream job with Twitter as a software engineer. A software engineer! Who would have thought. Having started my career as a journalist, I find the title ‘engineer’ very strange. The notion of writing in first person is also very strange. Journalists are taught to be objective, invisible, to keep yourself out of the story. And here I am writing about myself on a public platform. Cringe. Since I started learning to code I’ve often felt compelled to write about my experience. I want to share my excitement and struggles with the world! But as a junior I’ve been held back by thoughts like ‘whatever you have to say won’t be technical enough’, ‘any time spent writing a blog would be better spent writing code’, ‘blogging is narcissistic’, etc.  Well, I’ve been told that your thirties are the years where you stop caring so much about what other people think. And I’m almost 30. So here goes! These are five key lessons from my first year and a half in tech: Deployments should delight, not dread Lesson #1: Making your deployment process as simple as possible is worth the investment. In my first dev job, I dreaded deployments. We would deploy every Sunday night at 8pm. Preparation would begin the Friday before. A nominated deployment manager would spend half a day tagging master, generating scripts, writing documentation and raising JIRAs. The only fun part was choosing a train gif to post in HipChat: “All aboard! The deployment train leaves in 3, 2, 1…” When Sunday night came around, at least one person from every squad would need to be online to conduct smoke tests. Most times, the deployments would succeed. Other times they would fail. Regardless, deployments ate into people’s weekend time — and they were intense. Devs would rush to have their code approved before the Friday cutoff. Deployment managers who were new to the process would fear making a mistake.  The team knew deployments were a problem. They were constantly striving to improve them. And what I’ve learnt from Twitter is that when they do, their lives will be bliss. TweetDeck’s deployment process fills me with joy and delight. It’s quick, easy and stress free. In fact, it’s so easy I deployed code on my first day in the job! Anyone can deploy, at any time of day, with a single command. Rollbacks are just as simple. There’s no rush to make the deployment train. No manual preparation. No fuss. Value — whether in the form of big new features, simple UI improvements or even production bug fixes — can be shipped in an instant. The team assures me the process wasn’t always like this. They invested lots of time in making their deployments better. And it’s clearly paid off. Code reviews need love, time and acceptance Lesson #2: Code reviews are a three-way gift. Every time I review someone else’s code, I help them, the team and myself. Code reviews were another pain point in my previous job. And to be honest, I was part of the problem. I would raise code reviews that were far too big. They would take days, sometimes weeks, to get merged. One of my reviews had 96 comments! I would rarely review other people’s code because I felt too junior, like my review didn’t carry any weight.  The review process itself was also tiring, and was often raised in retrospectives as being slow. In order for code to be merged it needed to have ticks of approval from two developers and a third tick from a peer tester.
It was the responsibility of the author to assign the reviewers and tester. It was felt that if it was left to team members to assign themselves to reviews, the “someone else will do it” mentality would kick in, and nothing would get done. At TweetDeck, no-one is specifically assigned to reviews. Instead, when a review is raised, the entire team is notified. Without fail, someone will jump on it. Reviews are seen as blocking. They’re seen as equally important as, if not more important than, your own work. I haven’t seen a review sit for longer than a few hours without comments.  We also don’t work on branches. We push single commits for review, which are then merged to master. This forces the team to work in small, incremental changes. If a review is too big, or if it’s going to take up more than an hour of someone’s time, it will be sent back. What I’ve learnt so far at Twitter is that code reviews must be small. They must take priority. And they must be a team effort. Being a new starter is no “get out of jail free card”. In fact, it’s even more of a reason to be reviewing code. Reviews are a great way to learn, get across the product and see different programming styles. If you’re like me, and find code reviews daunting, ask to pair with a senior until you feel more confident. I recently paired with my mentor at Twitter and found it really helpful. Get friendly with feature flagging Lesson #3: Feature flagging gives you complete control over how you build and release a project. Say you’re implementing a new feature. It’s going to take a few weeks to complete. You’ll complete the feature in small, incremental changes. At what point do these changes get merged to master? At what point do they get deployed? Do you start at the back end and finish with the UI, so the user won’t see the changes until they’re ready? With feature flagging — it doesn’t matter. In fact, with feature flagging, by the time you are ready to release your feature, it’s already deployed, sitting happily in master with the rest of your codebase.  A feature flag is a boolean value that gets wrapped around the code relating to the thing you’re working on. The code will only be executed if the value is true.

if (TD.decider.get('new_feature')) {
  // code for new feature goes here
}

In my first dev job, I deployed a navigation link to the feature I’d been working on, making it visible in the product, even though the feature wasn’t ready. “Why didn’t you use a feature flag?” a senior dev asked me. An honest response would have been: “Because they’re confusing to implement and I don’t understand the benefits of using them.” The fix had to wait until the next deployment. The best thing about feature flagging at TweetDeck is that there is no need to deploy to turn on or off a feature. We set the status of the feature via an interface called Deckcider, and the code makes regular API requests to get the status.  At TweetDeck we are also able to roll our features out progressively. The first rollout might be to a staging environment. Then to employees only. Then to 10 per cent of users, 20 per cent, 30 per cent, and so on. A gradual rollout allows you to monitor for bugs and unexpected behaviour, before releasing the feature to the entire user base. Sometimes a piece of work requires changes to existing business logic. So the code might look more like this:

if (TD.decider.get('change_to_existing_feature')) {
  // new logic goes here
} else {
  // old logic goes here
}

This seems messy, right?
Riddling your code with if else statements to determine which path of logic should be executed, or which version of the UI should be displayed. But at Twitter, this is embraced. You can always clean up the code once a feature is turned on. This isn’t essential, though. At least not in the early days. When a cheeky bug is discovered, having the flag in place allows the feature to be very quickly turned off again. Let data and experimentation drive development Lesson #4: Use data to determine the direction of your product and measure its success. The first company I worked for placed a huge amount of emphasis on data-driven decision making. If we had an idea, or if we wanted to make a change, we were encouraged to “bring data” to show why it was necessary. “Without data, you’re just another person with an opinion,” the chief data scientist would say. This attitude helped to ensure we were building the right things for our customers. Instead of just plucking a new feature out of thin air, it was chosen based on data that reflected its need. But how do you design that feature? How do you know that the design you choose will have the desired impact? That’s where experiments come into play.  At TweetDeck we make UI changes that we hope will delight our users. But the assumptions we make about our users are often wrong. Our front-end team recently sat in a room and tried to guess which UIs from A/B tests had produced better results. Half the room guessed incorrectly every time. We can’t assume a change we want to make will have the impact we expect. So we run an experiment. Here’s how it works. Users are placed into buckets. One bucket of users will have access to the new feature, the other won’t. We hypothesise that the bucket exposed to the new feature will have better results. The beauty of running an experiment is that we’ll know for sure. Instead of blindly releasing the feature to all users without knowing its impact, once the experiment has run its course, we’ll have the data to make decisions accordingly.
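Bucketing can be as simple as hashing each user’s ID so the same person always lands in the same group. Here’s a rough sketch (this isn’t TweetDeck’s actual implementation, and the function name is made up for illustration):

function bucketForExperiment(userId, experimentName) {
  // Hash the user id and experiment name together so that assignment
  // is stable for a given user but varies between experiments
  var str = experimentName + ':' + userId;
  var hash = 0;
  for (var i = 0; i < str.length; i++) {
    hash = ((hash << 5) - hash + str.charCodeAt(i)) | 0;
  }
  // Half of users see the new version; the other half are the control
  return Math.abs(hash) % 2 === 0 ? 'experiment' : 'control';
}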
Hire the developer, not the degree Lesson #5: Testing candidates on real world problems will allow applicants from all backgrounds to shine. Surely, a company like Twitter would give their applicants insanely difficult code tests, and the toughest technical questions, that only the cleverest CS graduates could pass, I told myself when applying for the job. Lucky for me, this wasn’t the case. The process was insanely difficult—don’t get me wrong—but the team at TweetDeck gave me real world problems to solve. The first code test involved bug fixes, performance and testing. The second involved DOM traversal and manipulation. Instead of being put on the spot in a room with a whiteboard and pen I was given a task, access to the internet, and time to work on it. Similarly, in my technical interviews, I was asked to pair program on real world problems that I was likely to face on the job. In one of my phone screenings I was told Twitter wanted to increase diversity in its teams. Not just gender diversity, but also diversity of experience and background. Six months later, with a bunch of new hires, team lead Tom Ashworth says TweetDeck has the most diverse team it’s ever had. “We designed an interview process that gave us a way to simulate the actual job,” he said. “It’s not about testing whether you learnt an algorithm in school.” Is this lowering the bar? No. The bar is whether a candidate has the ability to solve problems they are likely to face on the job. I recently spoke to a longstanding Atlassian engineer who said they hadn’t seen an algorithm in their seven years at the company. These days, only about 50 per cent of developers have computer science degrees. The majority of developers are self taught, learn on the job or via online courses. If you want to increase diversity in your engineering team, ensure your interview process isn’t excluding these people. 2016 Amy Simmons amysimmons 2016-12-20T00:00:00+00:00 https://24ways.org/2016/my-first-18-months-as-a-dev/ process
163 Get To Grips with Slippy Maps Online mapping has definitely hit mainstream. Google Maps made ‘slippy maps’ popular and made it easy for any developer to quickly add a dynamic map to his or her website. You can now find maps for store locations, friends nearby, upcoming events, and embedded in blogs. In this tutorial we’ll show you how to easily add a map to your site using the Mapstraction mapping library. There are many map providers available to choose from, each with slightly different functionality, design, and terms of service. Mapstraction makes deciding which provider to use easy by allowing you to write your mapping code once, and then easily switch providers. Assemble the pieces Utilizing any of the mapping libraries typically consists of similar overall steps: Create an HTML div to hold the map Include the Javascript libraries Create the Javascript Map element Set the initial map center and zoom level Add markers, lines, overlays and more Create the Map Div The HTML div is where the map will actually show up on your page. It needs to have a unique id, because we’ll refer to that later to actually put the map there. This also lets you have multiple maps on a page, by creating individual divs and Javascript map elements. The size of the div also sets the height and width of the map. You set the size using CSS, either inline with the element, or via a CSS reference to the element id or class. For this example, we’ll use inline styling. <div id="map" style="width: 400px; height: 400px;"></div> Include Javascript libraries A mapping library is like any Javascript library. You need to include the library in your page before you use the methods of that library. For our tutorial, we’ll need to include at least two libraries: Mapstraction, and the mapping API(s) we want to display. In our first example we’ll use the ubiquitous Google Maps library. However, you can just as easily include Yahoo, MapQuest, or any of the other supported libraries. Another important aspect of the mapping libraries is that many of them require an API key. You will need to agree to the terms of service and get an API key for these. <script src="http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY" type="text/javascript"></script> <script type="text/javascript" src="http://mapstraction.com/src/mapstraction.js"></script> Create the Map Great, we’ve now put in all the pieces we need to start actually creating our map. This is as simple as creating a new Mapstraction object with the id of the HTML div we created earlier, and the name of the mapping provider we want to use for this map. With several of the mapping libraries you will need to set the map center and zoom level before the map will appear. The map centering actually triggers the initialization of the map. var mapstraction = new Mapstraction('map','google'); var myPoint = new LatLonPoint(37.404,-122.008); mapstraction.setCenterAndZoom(myPoint, 10); A note about zoom levels. The setCenterAndZoom function takes two parameters, the center as a LatLonPoint, and a zoom level that has been defined by mapping libraries. The current usage is for zoom level 1 to be “zoomed out”, viewing the entire earth – with the zoom level increasing as you zoom in. Typically 17 is the maximum zoom, which is about the size of a house. Different mapping providers have different quality of zoomed-in maps over different parts of the world.
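Putting all of those pieces together, a bare-bones page looks something like this (a sketch assembled from the snippets above; remember to substitute your own key for YOUR_KEY):

<html>
<head>
  <script src="http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY" type="text/javascript"></script>
  <script type="text/javascript" src="http://mapstraction.com/src/mapstraction.js"></script>
</head>
<body>
  <div id="map" style="width: 400px; height: 400px;"></div>
  <script type="text/javascript">
    // Create the map, then centre it to trigger initialization
    var mapstraction = new Mapstraction('map','google');
    mapstraction.setCenterAndZoom(new LatLonPoint(37.404,-122.008), 10);
  </script>
</body>
</html>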
That variation in coverage is a perfect reason why using a library like Mapstraction is so useful: you can quickly change mapping providers to accommodate users in areas that have bad coverage with some maps. To switch providers, you just need to include the Javascript library, and then change the second parameter in the Mapstraction creation. Or, you can call the switch method to dynamically switch the provider. So for Yahoo Maps (demo): var mapstraction = new Mapstraction('map','yahoo'); or Microsoft Maps (demo): var mapstraction = new Mapstraction('map','microsoft'); Want a 3D globe in your browser? Try FreeEarth (demo): var mapstraction = new Mapstraction('map','freeearth'); or even OpenStreetMap (free your data!) (demo): var mapstraction = new Mapstraction('map','openstreetmap'); Visit the Mapstraction multiple map demo page for an example of how easy it is to have many maps on your page, each with a different provider. Adding Markers While adding your first map is fun, and you can probably spend hours just sliding around, the point of adding a map to your site is usually to show the location of something. So now you want to add some markers. There are a couple of ways to add markers to your map. The simplest is directly creating markers. You could either hard code this into a rather static page, or dynamically generate these using whatever tools your site is built on. var marker = new Marker( new LatLonPoint(37.404,-122.008) ); marker.setInfoBubble("It's easy to add maps to your site"); mapstraction.addMarker( marker ); There is a lot more you can do with markers, including changing the icon, adding timestamps, automatically opening the bubble, or making them draggable. While it is straightforward to create markers one by one, there is a much easier way to create a large set of markers. And chances are, you can make it very easy by extending some data you’re already sharing: RSS. Specifically, using GeoRSS you can easily add a large set of markers directly to a map. GeoRSS is a community-built standard (like Microformats) that adds geographic markup to RSS and Atom entries. It’s as simple as adding <georss:point>42 -83</georss:point> to your feeds to share items via GeoRSS. Once you’ve done that, you can add that feed as an ‘overlay’ to your map using the function: mapstraction.addOverlay("http://api.flickr.com/services/feeds/groups_pool.gne?id=322338@N20&format=rss_200&georss=1"); Mapstraction also supports KML for many of the mapping providers. So it’s easy to add various data sources together with your own data. Check out Mapufacture for a growing index of available GeoRSS feeds and KML documents. Play with your new toys Mapstraction offers a lot more functionality you can utilize for demonstrating a lot of geographic data on your website. It also includes geocoding and routing abstraction layers for making sure your users know where to go. You can see more on the Mapstraction website: http://mapstraction.com. 2007 Andrew Turner andrewturner 2007-12-02T00:00:00+00:00 https://24ways.org/2007/get-to-grips-with-slippy-maps/ code
247 Managing Flow and Rhythm with CSS Custom Properties An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn’t just make your web pages and views look better, but it can also improve their scannability. Browsers ship with default CSS and these styles often create consistent rhythm for flow elements out of the box. The problem, though, is that we often wipe these styles out with a CSS reset. Elements such as <div> and <section> also have no default margin or padding associated with them. I’ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS while also levelling the playing field for component driven front-ends, with very little success. This experimentation is how I landed on the flow utility, though, and I’m going to show you how it works. Let’s dive in! The Flow utility With the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it’s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. That’s fine for 90% of contexts, but sometimes, it’s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy. The code .flow { --flow-space: 1em; } .flow > * + * { margin-top: var(--flow-space); } What this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don’t have any idea what surrounds them. The other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty about setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need to. Pretty cool, right? Let’s look at some examples. A basic example See the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/LXqerj What we’ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there’s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. Because our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a <h2> in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we’d affect everything because margin values would be calculated as 1.1X the font size. This example looks great because using font size as the basis of rhythm works really well. What if we wanted to tweak certain elements within this article, though? See the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/qQgxaY I like lots of whitespace with my article layouts, so the 1em space isn’t going to cut it for all elements.
I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances: h2 { --flow-space: 3rem; } Notice how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it’s better for accessibility to use flexible units like em, rem and %, so that a user’s font size preferences are honoured. A more advanced example Although the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space. See the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen. https://codepen.io/hankchizljaw/pen/ZmPGyL In this example, we’ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article’s content would space itself: <article class="media__content flow"> ... </article> Something to remember though: the custom properties cascade in the same way that other CSS values do, so you’ve got to keep that in mind. There’s a great example of that here: because the flow utility on our .features component has a --flow-space override, the child elements of .features will inherit that value, so we’ve had to set another value on the .features__list element. “But what about old browsers?”, I hear you cry We’re using CSS Custom Properties which, at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element’s font-size: .flow { --flow-space: 1em; } .flow > * + * { margin-top: 1em; margin-top: var(--flow-space); } Thanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them—those that don’t will ignore them. Yay for the cascade and progressive enhancement! Wrapping up This tiny little utility can bring great power when you want to consistently space elements vertically. It also—thanks to the power of the modern web—allows us to create contextual overrides without creating modifier classes or shame CSS. If you’ve got other methods of doing this sort of work, please let me know on Twitter. I’d love to see what you’re working on! 2018 Andy Bell andybell 2018-12-07T00:00:00+00:00 https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/ code
138 Rounded Corner Boxes the CSS3 Way If you’ve been doing CSS for a while you’ll know that there are approximately 3,762 ways to create a rounded corner box. The simplest techniques rely on the addition of extra mark-up directly to your page, while the more complicated ones add the mark-up through DOM manipulation. While these techniques are all very interesting, they do seem somewhat of a kludge. The goal of CSS is to separate structure from presentation, yet here we are adding superfluous mark-up to our code in order to create a visual effect. The reason we are doing this is simple. CSS2.1 only allows a single background image per element. Thankfully this looks set to change with the addition of multiple background images into the CSS3 specification. With CSS3 you’ll be able to add not one, not four, but eight background images to a single element. This means you’ll be able to create all kinds of interesting effects without the need of those additional elements. While the CSS working group still seem to be arguing over the exact syntax, Dave Hyatt went ahead and implemented the currently suggested mechanism into Safari. The technique is fiendishly simple, and I think we’ll all be a lot better off once the W3C stop arguing over the details and allow browser vendors to get on and provide the tools we need to build better websites. To create a CSS3 rounded corner box, simply start with your box element and apply your 4 corner images, separated by commas. .box { background-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif); } We don’t want these background images to repeat, which is the normal behaviour, so let’s set all their background-repeat properties to no-repeat. .box { background-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif); background-repeat: no-repeat, no-repeat, no-repeat, no-repeat; } Lastly, we need to define the positioning of each corner image. .box { background-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif); background-repeat: no-repeat, no-repeat, no-repeat, no-repeat; background-position: top left, top right, bottom left, bottom right; } And there we have it, a simple rounded corner box with no additional mark-up. As well as using multiple background images, CSS3 also has the ability to create rounded corners without the need of any images at all. You can do this by setting the border-radius property to your desired value as seen in the next example. .box { border-radius: 1.6em; } This technique currently works in Firefox/Camino and creates a nice, if somewhat jagged, rounded corner. If you want to create a box that works in both Mozilla and WebKit based browsers, why not combine both techniques and see what happens.
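A combined rule would simply merge the two approaches; treat this as an experiment rather than a recipe, since browser behaviour when both are applied may vary:

.box {
  background-image: url(top-left.gif), url(top-right.gif), url(bottom-left.gif), url(bottom-right.gif);
  background-repeat: no-repeat, no-repeat, no-repeat, no-repeat;
  background-position: top left, top right, bottom left, bottom right;
  border-radius: 1.6em;
}

2006 Andy Budd andybudd 2006-12-04T00:00:00+00:00 https://24ways.org/2006/rounded-corner-boxes-the-css3-way/ code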
333 The Attribute Selector for Fun and (no ad) Profit If I had a favourite CSS selector, it would undoubtedly be the attribute selector (Ed: You really need to get out more). For those of you not familiar with the attribute selector, it allows you to style an element based on the existence, value or partial value of a specific attribute. At its most basic level you could use this selector to style an element with a particular attribute, such as a title attribute. <abbr title="Cascading Style Sheets">CSS</abbr> In this example I’m going to make all <abbr> elements with a title attribute grey. I am also going to give them a dotted bottom border that changes to a solid border on hover. Finally, for that extra bit of feedback, I will change the cursor to a question mark on hover as well. abbr[title] { color: #666; border-bottom: 1px dotted #666; } abbr[title]:hover { border-bottom-style: solid; cursor: help; } This provides a nice way to show your site users that <abbr> elements with title attributes are special, as they contain extra, hidden information. Most modern browsers such as Firefox, Safari and Opera support the attribute selector. Unfortunately Internet Explorer 6 and below does not support the attribute selector, but that shouldn’t stop you from adding nice usability embellishments to more modern browsers. Internet Explorer 7 looks set to implement this CSS2.1 selector, so expect to see it become more common over the next few years. Styling an element based on the existence of an attribute is all well and good, but it is still pretty limited. Where attribute selectors come into their own is their ability to target the value of an attribute. You can use this for a variety of interesting effects such as styling VoteLinks. VoteWhats? If you haven’t heard of VoteLinks, it is a microformat that allows people to show their approval or disapproval of a link’s destination by adding a pre-defined keyword to the rev attribute. For instance, if you had a particularly bad meal at a restaurant, you could signify your disapproval by adding a rev attribute with a value of vote-against. <a href="http://www.mommacherri.co.uk/" rev="vote-against">Momma Cherri's</a> You could then highlight these links by adding an image to the right of these links. a[rev="vote-against"]{ padding-right: 20px; background: url(images/vote-against.png) no-repeat right top; } This is a useful technique, but it will only highlight VoteLinks on sites you control. This is where user stylesheets come into play. If you create a user stylesheet containing this rule, every site you visit that uses VoteLinks will receive your new style. Cool huh? However my absolute favourite use for attribute selectors is as a lightweight form of ad blocking. Most online adverts conform to industry-defined sizes. So if you wanted to block all banner-ad sized images, you could simply add this line of code to your user stylesheet. img[width="468"][height="60"], img[width="468px"][height="60px"] { display: none !important; } To hide any banner-ad sized element, such as flash movies, applets or iFrames, simply apply the above rule to every element using the universal selector. *[width="468"][height="60"], *[width="468px"][height="60px"] { display: none !important; } Just bear in mind when using this technique that you may accidentally hide something that isn’t actually an advert; it just happens to be the same size. The Interactive Advertising Bureau lists a number of common ad sizes.
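For instance, three of the most common formats (the 728×90 leaderboard, the 300×250 medium rectangle and the 160×600 wide skyscraper) could be hidden with rules following the same pattern:

*[width="728"][height="90"], *[width="728px"][height="90px"],
*[width="300"][height="250"], *[width="300px"][height="250px"],
*[width="160"][height="600"], *[width="160px"][height="600px"] {
  display: none !important;
}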
Using these dimensions, you can create a stylesheet that blocks all the popular ad formats. Apply this as a user stylesheet and you never need to suffer another advert again. Here’s wishing you a Merry, ad-free Christmas. 2005 Andy Budd andybudd 2005-12-11T00:00:00+00:00 https://24ways.org/2005/the-attribute-selector-for-fun-and-no-ad-profit/ code
14 The Command Position Principle Living where I do, in a small village in rural North Wales, getting anywhere means driving along narrow country roads. Most of these are just about passable when two cars meet. If you’re driving too close to the centre of the road, when two drivers meet you stop, glare at each other and no one goes anywhere. Drive too close to your nearside and in summer you’ll probably scratch your paintwork on the hedgerows, or in winter you’ll sink your wheels into mud. Driving these lanes requires a balance between caring for your own vehicle and consideration for someone else’s, but all too often, I’ve seen drivers pushed towards the hedgerows and mud when someone who’s inconsiderate drives too wide because they don’t want to risk scratching their own paintwork or getting their wheels dirty. If you learn to ride a motorcycle, you’ll be taught about the command position: Approximate central position, or any position from which the rider can exert control over invitation space either side. The command position helps motorcyclists stay safe, because when they ride in the centre of their lane it prevents other people, usually car drivers, from driving alongside, either forcing them into the curb or potentially dangerously close to oncoming traffic. Taking the command position isn’t about motorcyclists being aggressive, it’s about them being confident. It’s them knowing their rightful place on the road and communicating that through how they ride. I’ve recently been trying to take that command position when driving my car on our lanes. When I see someone coming in the opposite direction, instead of instinctively moving closer to my nearside — and in so doing subconsciously inviting them into my space on the road — I hold both my nerve and a central position in my lane. Since I’ve done this I’ve noticed that other drivers more often than not stay in their lane or pull closer to their nearside so we occupy equal space on the road. Although we both still need to watch our wing mirrors, neither of us gets our paint scratched or our wheels muddy. We can apply this principle to business too, in particular to negotiations and the way we sell. Here’s how we might do that. Commanding negotiations When a customer’s been sold to well — more on that in just a moment — and they’ve made the decision to buy, the thing that usually stands in the way of us doing business is a negotiation over price. Some people treat negotiations as the equivalent of driving wide. They act offensively, because their aim is to force the other person into getting less, usually in return for giving more. In encounters like this, it’s easy for us to act defensively. We might lack confidence in the price we ask for, or the value of the product or service we offer. We might compromise too early because of that. When that happens, there’s a pretty good chance that we’ll drive away with less than we deserve unless we use the command position principle to help us. Before we start any negotiation it’s important to know that both sides ultimately want to reach an agreement. This isn’t always obvious. If one side isn’t already committed, at least in principle, then it’s not a negotiation at that point, it’s something else. For example, a prospective customer may be looking to learn our lowest price so that they can compare it to our competitors’. When that’s the case, we’ve probably failed to qualify that prospect properly as, after all, who wants to be chosen simply because they’re the cheapest?
In this situation, negotiating is a waste of time since we don’t yet know that it will result in us making a deal. We should enter into a negotiation only when we know where we stand. So ask confidently: “Are you looking to [make a decision]?” When that’s been confirmed, it’s down to everyone to compromise until a deal’s been reached. That’s because good negotiations aren’t about one side beating the other, they’re about achieving a good deal for both. Using the command position principle helps us to maintain control over our negotiating space and affords us the opportunity to give ground only if we need to and only when we’re ready. It can also ensure that the person we’re negotiating with gives up some of their space. Commanding sales It’s not always necessary to negotiate when we’re doing a business deal, but we should always be prepared to sell. One of the most important parts of our sales process should be controlling when and how we tell someone our price. Unless it’s impossible to avoid, don’t work out a price for someone on the spot. When we do that we lose control over the time and place for presenting our price alongside the value factors that will contribute to the prospective customer accepting that price. For the same reason, never give a ballpark or, worse, a guesstimate figure. If the question of price comes up before we’re fully prepared, we should say politely that we need more time to work out a meaningful cost. When we are ready, we shouldn’t email a price for our prospective customer to read unaccompanied. Instead, create an opportunity to talk a prospect through our figures, demonstrate how we arrived at them and, most importantly, explain the value of what we’re selling to their business. Agree a time and place to do this and, if possible, do it all face-to-face. We shouldn’t hesitate when we give someone a price. When we sound even the slightest bit unsure or apologetic, we give the impression that we’ll be flexible in our position before negotiations have even begun. Think about the command position principle, know the price and present it confidently. That way we send a clear signal that we know our business and how we deal with people. The command position principle isn’t about being cocky, it’s about showing other people respect, asking for it in return and showing it to ourselves. Earlier, I mentioned selling well, because we sometimes hear people say that they dislike being sold to. In my experience, it’s not that people dislike the sales process, it’s that we dislike it done badly. Taking part in a good sales process, either by selling or being sold to, can be a pleasurable experience. Try to be confident — after all, we understand how our skills will benefit a customer better than anyone else. Our confidence will inspire confidence in others. Self-confidence isn’t the same as arrogance, just as the command position isn’t the same as riding without consideration for others. The command position principle preserves others’ space as well as our own. By the same token, we should be considerate of others’ time and not waste it and our own by attempting to force them into buying something that’s inappropriate. To prevent this from happening, evaluate them well to ensure that they’re the right customer for us. If they’re not, let them go on their way. They’ll thank us for it and may well become customers the next time we meet. 
The business of closing a deal can be made an enjoyable experience for everyone if we take control, guiding someone through the sales process and asking the right questions to uncover their concerns, then allaying those concerns by being knowledgeable and confident. This is riding in the command position. Just as knowing our rightful position on the road does, knowing our rightful place in a business relationship and communicating that through how we deal with people will help everyone achieve an equitable balance. When that happens in business, as well as on the road, no one gets their paintwork scratched or their wheels muddy. 2013 Andy Clarke andyclarke 2013-12-23T00:00:00+00:00 https://24ways.org/2013/the-command-position-principle/ business
44 Taglines and Truisms To bring her good luck, “white rabbits” was the first thing that my grandmother said out loud on the first day of every month. We all need a little luck, but we shouldn’t rely on it, especially when it comes to attracting new clients. The first thing we say to a prospective client when they visit our website for the first time helps them to understand not only what we do but why we do it. We can also help them understand why they should choose to work with us over one of our competitors. Take a minute or two to look at your competitors’ websites. What’s the first thing that they say about themselves? Do they say that they “design delightful digital experiences,” “craft beautiful experiences” or “create remarkable digital experiences?” It’s easy to find companies who introduce themselves with what they do, their proposition, but what a company does is only part of their story. Their beliefs and values, what they stand for and why they do what they do, are also important. When someone visits our websites for the first time, we have only a brief moment to help them understand us. To help us we can learn from the advertising industry, where the job of a tagline is to communicate a concept, deliver a message and sell a product, often using only a few words. When an advertising campaign is effective, its tagline stays with you, sometimes long after that campaign is over. For example, can you remember which company or brand these taglines help to sell? (Answers at the bottom of the article.) The Ultimate Driving Machine Just Do It Don’t Leave Home Without It A clever tagline isn’t just a play on words, although it can include one. A tagline does far more than help make your company memorable. Used well, it brings together notions of what makes your company and what you offer special. Then it expresses those notions in a few words or possibly a short sentence. I’m sure that everyone can find examples of company slogans written in the type of language that should stay within the walls of a marketing department. We can also find taglines where the meaning is buried so deep that the tag itself becomes effectively meaningless. A meaningful tagline supports our ideas about who we are and what we offer, and provides a platform for different executions of them, sometimes over a period of time. For a tagline to work well, it must allow for current and future ideas about a brand. It must also be meaningful to our brand and describe a truism, a truth that need not be a fact or statistic, but something that’s true about us, who we are, what we do and why that’s distinctive. It can be obvious, funny, serious or specific but above all it must be true. It should also be difficult to argue with, making your messages difficult to argue with too. I doubt that I need to remind you who this tagline belongs to: There are some things money can’t buy. For everything else there’s MasterCard. That tagline was launched in 1997 by McCann-Erickson along with the “Priceless” campaign and it helped establish MasterCard as a friendlier credit card company, one with a sense of humour. MasterCard’s truism is that the things which really matter in life can’t be bought. They are worth more than anything that a monetary value can be applied to. In expressing that truism through the tagline, MasterCard’s advertising tells people to use not just any credit card, but their MasterCard, to pay for everything they buy. 
“Guinness is good for you” may have been a stretch, but “Good things come to those who wait” builds on the truism that patience is a virtue and therefore a good pint of Guinness takes time to pour (119.5 seconds. I know you were wondering.) The fact that British Airways flies to more destinations than any other airline is their truism, and led their advertisers to the now famous tagline, “The world’s favourite airline.” At my company, Stuff & Nonsense, we’ve been thinking about taglines as we think about our position within an industry that seems full of companies who “design”, “craft”, and “create” “delightful”, “beautiful”, “remarkable digital experiences”. Much of what made us different has changed along with the type of work we’re interested in doing. Our work’s expanded beyond websites and now includes design for mobile and other media. It’s true we can’t know how or where it will be seen. The ways that we make it are flexible too, as we’re careful not to become tied to particular tools or approaches. It’s also true that we’re a small team. One that’s flexible enough to travel around the world to work alongside our clients. We join their in-house teams and we collaborate with them in ways that other agencies often find more difficult. We know that our clients appreciate our flexibility and have derived enormous value from it. We know that we’ve won business because of it and that it’s now a big part of our proposition. Our truism is that we’re flexible, “Fabulously flexible” as our tagline now expresses. And although we know that there may be other agencies who can be similarly flexible – after all, being flexible is not a unique selling proposition – only we do it so fabulously. As the old year rolls into the new, how will your company describe what you do in 2015? More importantly, how will you tell prospective clients why you do it, what matters to you and why they should work with you? Start by writing a list of truisms about your company. Write as many as you can, but then whittle that list down to just one, the most important truth. Work on that truism to create a tagline that’s meaningful, difficult to argue with and, above all, uniquely yours. Answers The Ultimate Driving Machine (BMW) Just Do It (Nike) Don’t Leave Home Without It (American Express) 2014 Andy Clarke andyclarke 2014-12-23T00:00:00+00:00 https://24ways.org/2014/taglines-and-truisms/ business
51 Blow Your Own Trumpet Even if your own trumpet’s tiny and fell out of a Christmas cracker, blowing it isn’t something that everyone’s good at. Some people find selling themselves and what they do difficult. But, you know what? Boo hoo hoo. If you want people to buy something, the reality is you’d better get good at selling, especially if that something is you. For web professionals, the best place to tell potential business customers or possible employers about what you do is on your own website. You can write what you want and how you want, but that doesn’t make knowing what to write any easier. As a matter of fact, writing for yourself often proves harder than writing for someone else. I spent this autumn thinking about what I wanted to say about Stuff & Nonsense on the website we relaunched recently. While I did that, I spoke to other designers about how they struggled to write about their businesses. If you struggle to write well, don’t worry. You’re not on your own. Here are five ways to hit the right notes when writing about yourself and your work. Be genuine about who you are I’ve known plenty of talented people who run a successful business pretty much single-handed. Somehow they still feel awkward presenting themselves as individuals. They wonder whether describing themselves as a company will give them extra credibility. They especially agonise over using “we” rather than “I” when describing what they do. These choices get harder when you’re a one-man band trading as a limited company or LLC business entity. If you mainly work alone, don’t describe yourself as anything other than “I”. You might think that saying “we” makes you appear larger and will give you a better chance of landing bigger and better work, but the moment a prospective client asks, “How many people are you?” you’ll have some uncomfortable explaining to do. This will distract them from talking about your work and derail your sales process. There’s no need to be anything other than genuine about how you describe yourself. You should be proud to say “I” because working alone isn’t something that many people have the ability, business acumen or talent to do. Explain what you actually do How many people do precisely the same job as you? Hundreds? Thousands? The same goes for companies. If yours is a design studio, development team or UX consultancy, there are countless others saying exactly what you’re saying about what you do. Simply stating that you code, design or – God help me – “handcraft digital experiences” isn’t enough to make your business sound different from everyone else. Anyone can and usually does say that, but people buy more than deliverables. They buy something that’s unique about you and your business. Potentially thousands of companies deliver code and designs the same way as Stuff & Nonsense, but our clients don’t just buy page designs, prototypes and websites from us. They buy our taste for typography, colour and layout, summed up by our “It’s the taste” tagline and bowler hat tip to the PG Tips chimps. We hope that potential clients will understand what’s unique about us. Think beyond your deliverables to what people actually buy, and sell the uniqueness of that. Describe work in progress It’s sad that current design trends have made it almost impossible to tell one website from another. So many designers now demonstrate finished responsive website designs by pasting them onto iMac, MacBook, iPad and iPhone screens that their portfolios don’t fare much better. 
Every designer brings their own experience, perspective and process to a project. In my experience, it’s understanding those differences which forms a big part of how a prospective client makes a decision about who to work with. Don’t simply show a prospective client the end result of a previous project; explain your process, the development of your thinking and even the wrong turns you took. Traditional case studies, like the one I’ve just written about Stuff & Nonsense’s work for WWF UK, can take a lot of time. That’s probably why many portfolios get out of date very quickly. Designers make new work all the time, so there must be a better way to show more of it more often, to give prospective clients a clearer understanding of what we do. At Stuff & Nonsense our solution was to create a feed where we could post fragments of design work throughout a project. This also meant rewriting our Contract Killer to give us permission to publish work before someone signs it off. Outline a client’s experience Recently a client took me to one side and offered some valuable advice. She told me that our website hadn’t described anything about the experience she’d had while working with us. She said that knowing more about how we work would’ve helped her make her buying decision. When a client chooses your business, they’re hoping for more than a successful outcome. They want their project to run smoothly. They want to feel that they made a correct decision when they chose you. If they work for an organisation, they’ll want their good judgement to be recognised too. Our client didn’t recognise her experience because we hadn’t made our own website part of it. Remember, the challenge of creating a memorable user experience starts with selling to the people paying you for it. Address your ideal client It’s important to understand that a portfolio’s job isn’t to document your work, it’s to attract new work from clients you want. Make sure that the work you show reflects the work you want, because what you include in your portfolio often leads to more of the same. When you’re writing for your portfolio and elsewhere on your website, imagine that you’re addressing your ideal client. Picture them sitting opposite and answer the questions they’d ask as you would in conversation. Be direct, funny if that’s appropriate and serious when it’s not. If it helps, ask a friend to read the questions aloud and record what you say in response. This will help make what you write sound natural. I’ve found this technique helps clients write copy too. Toot your own horn Some people mistake expressing confidence in yourself and your work for boastfulness, but in a competitive world the reality is that if you are to succeed, you need to show confidence so that others can show their confidence in you. If you want people to hear you, pick up your trumpet and blow it. 2015 Andy Clarke andyclarke 2015-12-23T00:00:00+00:00 https://24ways.org/2015/blow-your-own-trumpet/ business
90 Monkey Business “Too expensive.” “Over-priced.” “A bit rich.” They all mean the same thing. When you say that something’s too expensive, you’re doing much more than commenting on a price. You’re questioning the explicit or implicit value of a product or a service. You’re asking, “Will what I get out of it be worth what you want me to pay for it?” You’re questioning the competency, judgement and possibly even integrity of the individual or company that gave you that price, even though you don’t realise it. You might not be saying it explicitly, but what you’re implying is, “Have you made a mistake?”, “Am I getting the best deal?”, “Are you being honest with me?”, “Could I get this cheaper?” Finally, you’re being dishonest, because deep down you know all too well that there’s no such thing as too expensive. Why? It doesn’t matter what you’re questioning the price of. It could be a product, a service or the cost of an hour, day or week of someone’s time. Whatever you’re buying, too expensive is always an excuse. Saying it shifts responsibility for the acceptability of a price back to the person who gave it. What you should say, but are too afraid to admit, is: “It’s more money than I wanted to pay.” “It’s more than I estimated it would cost.” “It’s more than I can afford.” Everyone who’s given a price for a product or service will have been told at some point that it’s too expensive. It’s never comfortable to hear that. Thoughts come thick and fast: “What do I do?” “How do I react?” “Do I really want the business?” “Am I prepared to negotiate?” “How much am I willing to compromise?” It’s easy to be defensive when someone questions a price, but before you react, stay calm and remember that if someone says what you’re offering is too expensive, they’re saying more about themselves and their situation than they are about your price. Learn to read that situation and to follow up with the right questions. Imagine you’ve quoted someone for a week of your time. “That’s too expensive,” they respond. How should you handle that? Think about what they might otherwise be saying. “It’s more money than I want to pay” may mean that they don’t understand the value of your service. How could you respond? Start by asking what similar projects they’ve worked on and the type of people they worked with. Find out what they paid and what they got for their money, because it’s possible what you offer is different from what they had before. Ask if they saw a return on that previous investment. Maybe their problem isn’t with your headline price, but with the value they think they’ll receive. Put the emphasis on value and shift the conversation to what they’ll gain, rather than what they’ll spend. It’s also possible they can’t distinguish your service from those of your competitors, so now would be a great time to explain the differences. Do you work faster? Explain how that could help them launch faster, get customers faster, make money faster. Do you include more? Emphasise that, and how unique the experience of working with you will be. “It’s more than I estimated it would cost” could mean that your customer hasn’t done their research properly. You’d never suggest that to them, of course, but you should ask how they’ve arrived at their estimate. Did they base it on work they’ve purchased previously? How long ago was that? Does it come from comparable work or from a different sector? Help your customer by explaining how you arrived at your estimate. 
Break down each element and while you’re doing that, emphasise the parts of your process that you know will appeal to them. If you know that they’ve had difficulty with something in the past, explain how your approach will benefit them. People almost always value a positive experience more than the money they’ll save. “It’s more than I can afford” could mean they can’t afford what you offer at all, but it could also mean they can’t afford it right now or all at once. So ask whether they could afford what you’re asking if they spread payment over a longer period. Ask, “Would that mean you’ll give me the business?” It’s possible they’re asking for too much for what they can afford to pay. Will they compromise? Can you reach an agreement on something less? Ask, “If we can agree what’s in and what’s out, will you give me the business?” What can they afford? When you know, you’re in a good position to decide if the deal makes good business sense, for both of you. Ask, “If I can match that price, will you give me the business?” There’s no such thing as “a bit rich”, only ways for you to get to know your customer better. There’s no such thing as “over-priced”, only opportunities for you to explain yourself better. You should relish those opportunities. There’s really no such thing as “too expensive” either, just ways to set the tone for your relationship and help you develop that relationship to a point where money will be less of a deciding factor. Unfinished Business Join me and my co-host Anna Debenham next year for Unfinished Business, a new discussion show about the business end of working in web, design and creative industries. 2012 Andy Clarke andyclarke 2012-12-23T00:00:00+00:00 https://24ways.org/2012/monkey-business/ business
105 Contract Killer When times get tough, it can often feel like there are no good people left in the world, only people who haven’t yet turned bad. These bad people will go back on their word, welch on a deal, put themselves first. You owe it to yourself to stay on top. You owe it to yourself to ensure that no matter how bad things get, you’ll come away clean. You owe it to yourself and your business not to be the guy lying bleeding in an alley with a slug in your gut. But you’re a professional, right? Nothing bad is going to happen to you. You’re a good guy. You do good work for good people. Think again, chump. Maybe you’re a gun for hire, a one-man army with your back to the wall and nothing standing between you and the line at a soup kitchen but your wits. Maybe you work for the agency, or, like me, you run one of your own. Either way, when times get tough and people get nasty, you’ll need more than a killer smile to save you. You’ll need a killer contract too. It was exactly ten years ago today that I first opened my doors for business. In that time I’ve thumbed through enough contracts to fill a filing cabinet. I’ve signed more contracts than I can remember, many so complicated that I should have hired a lawyer (or detective) to make sense of their convoluted jargon and solve their cross-reference puzzles. These documents had not been written to be understood on first reading but to spin me around enough times so as to give the other player the upper hand. If signing a contract I didn’t fully understand made me a stupid son-of-a-bitch, not asking my customers to sign one just makes me plain dumb. I’ve not always been so careful about asking my customers to sign contracts with me as I am now. Somehow in the past I felt that insisting on a contract went against the friendly, trusting relationship that I like to build with my customers. Most of the time the game went my way. On the rare occasions when a fight broke out, I ended up bruised and bloodied. I learned that asking my customers to sign a contract matters to both sides, but what also matters to me is that these contracts should be more meaningful, understandable and less complicated than any of those that I have ever autographed. Writing a killer contract If you are writing a contract between you and your customers, it doesn’t have to conform to the seemingly standard format of jargon and complicated legalese. You can be creative. A killer contract will clarify what is expected of both sides and it can also help you to communicate your approach to doing business. It will back up your brand values and help you to build a great relationship between you and your customers. In other words, a creative contract can be a killer contract. Your killer contract should cover: A simple overview of who is hiring who, what they are being hired to do, when and for how much What both parties agree to do and what their respective responsibilities are The specifics of the deal and what is or isn’t included in the scope What happens when people change their minds (as they almost always do) A simple overview of liabilities and other legal matters You might even include a few jokes To help you along, I will illustrate those bullet points by pointing both barrels at the contract that I wrote and have been using at Stiffs & Nonsense for the past year. My contract has been worth its weight in lead and you are welcome to take all or any part of it to use for yourself. It’s packing a Creative Commons attribution share-alike licence. 
That means you are free to re-distribute it, translate it and otherwise re-use it in ways I never considered. In return I only ask you mention my name and link back to this article. As I am only an amateur detective, you should have it examined thoroughly by your own, trusted legal people if you use it. NB: The specific details of this killer contract work for me and my customers. That doesn’t mean that they will work for you and yours. The ways that I handle design revisions, testing, copyright ownership and other specifics are not the main focus of this article. That you handle each of them carefully when you write your own killer contract is. Kiss Me, Deadly Setting a tone and laying foundations for agreement The first few paragraphs of a killer contract are the most important. Just like a well-thought-out web page, these first few words should be simple, concise and include the key points in your contract. As this is the part of the contract that people absorb most easily, it is important that you make it count. Start by setting the overall tone and explaining how your killer contract is structured and why it is different. We will always do our best to fulfill your needs and meet your goals, but sometimes it is best to have a few simple things written down so that we both know what is what, who should do what and what happens if stuff goes wrong. In this contract you won’t find complicated legal terms or large passages of unreadable text. We have no desire to trick you into signing something that you might later regret. We do want what’s best for the safety of both parties, now and in the future. In short You ([customer name]) are hiring us ([company name]) located at [address] to [design and develop a web site] for the estimated total price of [total] as outlined in our previous correspondence. Of course it’s a little more complicated, but we’ll get to that. The Big Kill What both parties agree to do Have you ever done work on a project in good faith for a junior member of a customer’s team, only to find out later that their spending hadn’t been authorized? To make damn sure that does not happen to you, you should ask your customer to confirm that not only are they authorized to enter into your contract but that they will fulfill all of their obligations to help you meet yours. This will help you to avoid any gunfire if, as deadline day approaches, you have fulfilled your side of the bargain but your customer has not come up with the goods. As our customer, you have the power and ability to enter into this contract on behalf of your company or organization. You agree to provide us with everything that we need to complete the project including text, images and other information as and when we need it, and in the format that we ask for. You agree to review our work, provide feedback and sign-off approval in a timely manner too. Deadlines work two ways and you will also be bound by any dates that we set together. You also agree to stick to the payment schedule set out at the end of this contract. We have the experience and ability to perform the services you need from us and we will carry them out in a professional and timely manner. Along the way we will endeavor to meet all the deadlines set but we can’t be responsible for a missed launch date or a deadline if you have been late in supplying materials or have not approved or signed off our work on time at any stage. On top of this we will also maintain the confidentiality of any information that you give us. 
My Gun Is Quick Getting down to the nitty gritty What appear at first to be straightforward projects can sometimes turn long and complicated, and unless you play it straight from the beginning your relationship with your customer can suffer under the strain. Customers do, and should have the opportunity to, change their minds and give you new assignments. After all, projects should be flexible and few customers know from the get-go exactly what they want to see. If you handle this well from the beginning you will help to keep yourself and your customers from becoming frustrated. You will also help yourself to dodge bullets in the event of a firefight. We will create designs for the look-and-feel, layout and functionality of your web site. This contract includes one main design plus the opportunity for you to make up to two rounds of revisions. If you’re not happy with the designs at this stage, you will pay us in full for all of the work that we have produced until that point and you may either cancel this contract or continue to commission us to make further design revisions at the daily rate set out in our original estimate. We know from plenty of experience that fixed-price contracts are rarely beneficial to you, as they often limit you to your first idea about how something should look, or how it might work. We don’t want to limit either your options or your opportunities to change your mind. The estimate/quotation prices at the beginning of this document are based on the number of days that we estimate we’ll need to accomplish everything that you have told us you want to achieve. If you do want to change your mind, add extra pages or templates or even add new functionality, that won’t be a problem. You will be charged the daily rate set out in the estimate we gave you. Along the way we might ask you to put requests in writing so we can keep track of changes. As I like to push my luck when it comes to CSS, it never hurts to head off the potential issue of progressive enrichment right from the start. You should do this too. But don’t forget that when it comes to technical matters your customers may have different expectations or understanding, so be clear about what you will and won’t do. If the project includes XHTML or HTML markup and CSS templates, we will develop these using valid XHTML 1.0 Strict markup and CSS2.1 + 3 for styling. We will test all our markup and CSS in current versions of all major browsers including those made by Apple, Microsoft, Mozilla and Opera. We will also test to ensure that pages will display visually in a ‘similar’, albeit not necessarily an identical way, in Microsoft Internet Explorer 6 for Windows as this browser is now past its sell-by date. We will not test these templates in old or abandoned browsers, for example Microsoft Internet Explorer 5 or 5.5 for Windows or Mac, previous versions of Apple’s Safari, Mozilla Firefox or Opera unless otherwise specified. If you need to show the same or similar visual design to visitors using these older browsers, we will charge you at the daily rate set out in our original estimate for any necessary additional code and its testing. The Twisted Thing It is not unheard of for customers to pass off stolen goods as their own. If this happens, make sure that you are not the one left holding the baby. You should also make it clear who owns the work that you make, as customers often believe that because they pay for your time, they own everything that you produce. 
Copyrights You guarantee to us that any elements of text, graphics, photos, designs, trademarks, or other artwork that you provide us for inclusion in the web site are either owned by your good selves, or that you have permission to use them. When we receive your final payment, copyright is automatically assigned as follows: You own the graphics and other visual elements that we create for you for this project. We will give you a copy of all files and you should store them really safely as we are not required to keep them or provide any native source files that we used in making them. You also own text content, photographs and other data you provided, unless someone else owns them. We own the XHTML markup, CSS and other code and we license it to you for use on only this project. Vengeance Is Mine! The fine print Unless your work is pro bono, you should make sure that your customers keep you in shoe leather. It is important that your customers know from the outset that they must pay you on time if they want to stay on good terms. We are sure you understand how important it is as a small business that you pay the invoices that we send you promptly. As we’re also sure you’ll want to stay friends, you agree to stick tight to the following payment schedule. [Payment schedule] No killer contract would be complete without you making sure that you are watching your own back. Before you ask your customers to sign, make it clear-cut what your obligations are and what will happen if any part of your killer contract finds itself lying face down in the dirt. We can’t guarantee that the functions contained in any web page templates or in a completed web site will always be error-free and so we can’t be liable to you or any third party for damages, including lost profits, lost savings or other incidental, consequential or special damages arising out of the operation of or inability to operate this web site and any other web pages, even if you have advised us of the possibilities of such damages. Just like a parking ticket, you cannot transfer this contract to anyone else without our permission. This contract stays in place and need not be renewed. If any provision of this agreement shall be unlawful, void, or for any reason unenforceable, then that provision shall be deemed severable from this agreement and shall not affect the validity and enforceability of any remaining provisions. Phew. Although the language is simple, the intentions are serious and this contract is a legal document under exclusive jurisdiction of [English] courts. Oh and don’t forget those men with big dogs. Survival… Zero! Take it from me, packing a killer contract will help to keep you safe when times get tough, but you must still keep your wits about you and stay on the right side of the law. Don’t be a turkey this Christmas. Be a contract killer. Update, May 2010: For a follow-on to this article see Contract Killer: The Next Hit 2008 Andy Clarke andyclarke 2008-12-23T00:00:00+00:00 https://24ways.org/2008/contract-killer/ business
122 A Message To You, Rudy - CSS Production Notes When more than one designer or developer works together on coding an XHTML/CSS template, there are several ways to make collaboration effective. Some prefer to comment their code, leaving a trail of bread-crumbs for their co-workers to follow. Others use accompanying files that contain their working notes or communicate via Basecamp. For this year’s 24ways I wanted to share a technique that has been effective at Stuff and Nonsense; one that unfortunately did not make it into the final draft of Transcending CSS. This technique, CSS production notes, places your page production notes in one convenient place within an XHTML document and uses nothing more than meaningful markup and CSS. Let’s start with the basics: a conversation between a group of people. In the absence of notes or conversation elements in XHTML, you need to make an XHTML compound that will effectively add meaning to the conversation between designers and developers. As each person speaks, you have two elements right there to describe what has been said and who has spoken: <blockquote> and its cite attribute. <blockquote cite="andy"> <p>This project will use XHTML1.0 Strict, CSS2.1 and all that malarkey.</p> </blockquote> With more than one person speaking, you need to establish a temporal order for the conversation. Once again, the element to do just that is already there in XHTML: the humble ordered list. <ol id="notes"> <li> <blockquote cite="andy"> <p>This project will use XHTML1.0 Strict, CSS2.1 and all that malarkey.</p> </blockquote> </li> <li> <blockquote cite="dan"> <p>Those bits are simple and bulletproof.</p> </blockquote> </li> </ol> Adding a new note is as simple as adding a new item to the list, and if you prefer to add more information to each note, such as the date or time that the note was written, go right ahead. Place your note list at the bottom of the source order of your document, right before the closing <body> tag. One advantage of this approach over using conventional comments in your code is that all the notes are unobtrusive and are grouped together in one place, rather than being spread throughout your document. Basic CSS styling For the first stage you are going to add some basic styling to the notes area, starting with the ordered list. For this design I am basing the look and feel on an instant messenger window. ol#notes { width : 300px; height : 320px; padding : .5em 0; background : url(im.png) repeat; border : 1px solid #333; border-bottom-width : 2px; -moz-border-radius : 6px; /* Will not validate */ color : #000; overflow : auto; } ol#notes li { margin : .5em; padding : 10px 0 5px; background-color : #fff; border : 1px solid #666; -moz-border-radius : 6px; /* Will not validate */ } ol#notes blockquote { margin : 0; padding : 0; } ol#notes p { margin : 0 20px .75em; padding : 0; } ol#notes p.date { font-size : 92%; color : #666; text-transform : uppercase; } Take a gander at the first example. You could stop right there, but without seeing who has left the note, there is little context. So next, extract the name of the commenter from the <blockquote>’s cite attribute and display it before each note by using generated content. ol#notes blockquote:before { content : " "attr(cite)" said: "; margin-left : 20px; font-weight : bold; } Fun with more detailed styling Now, with all of the information and basic styling in place, it’s time to have some fun with some more detailed styling to spruce up your notes. 
Let’s start by adding an icon for each person, once again based on their cite. First, the first paragraph of any <blockquote> that includes a cite attribute is given common styles. ol#notes blockquote[cite] p:first-child { min-height : 34px; padding-left : 40px; } Followed by an individual background-image. ol#notes blockquote[cite="andy"] p:first-child { background : url(malarkey.png) no-repeat 5px 5px; } If you prefer a little more interactivity, add a :hover state to each <blockquote> and perhaps highlight the most recent comment. ol#notes blockquote:hover { background-color : #faf8eb; border-top : 1px solid #fff; border-bottom : 1px solid #333; } ol#notes li:last-child blockquote { background-color : #f1efe2; } You could also adjust the style for each comment based on the department that the person works in, for example: <li> <blockquote cite="andy" class="designer"> <p>This project will use XHTML1.0 Strict, CSS2.1 and all that malarkey.</p> </blockquote> </li> <li> <blockquote cite="dan"> <p>Those bits are simple and bulletproof.</p> </blockquote> </li> ol#notes blockquote.designer { border-color : #600; } Take a look at the results of the second stage. Show and hide the notes using CSS positioning With your notes now dressed in their finest, it is time to tuck them away above the top of your working XHTML/CSS prototype so that you can reveal them when you need them, no JavaScript required. Start by moving the ordered list of notes off the top of the viewport, leaving only a few pixels in view. It is also a good idea to make them semi-transparent by using the opacity property for browsers that have implemented it. ol#notes { position : absolute; opacity : .25; z-index : 2000; top : -305px; left : 20px; } Your last step is to add :hover and :focus dynamic pseudo-classes to reposition the list at the top of the viewport and restore full opacity to display them in their full glory when needed. ol#notes:hover, ol#notes:focus { top : 0; opacity : 1; } Now it’s time to sit back, pour yourself a long drink and bask in the glory of the final result. Your notes are all stored in one handy place at the bottom of your document rather than being spread around your code. When your templates are complete, simply dive straight to the bottom and pull out the notes. A Message To You, Rudy Thank you to everybody for making this a really great year for web standards. Have a wonderful holiday season. Buy Andy Clarke’s book Transcending CSS from Amazon.com 2006 Andy Clarke andyclarke 2006-12-15T00:00:00+00:00 https://24ways.org/2006/css-production-notes/ process
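One small gap in the markup examples above: the stylesheet includes a rule for ol#notes p.date, but none of the notes shown carries a date. A hedged sketch of how such a note might be marked up (the wording and timestamp are my own, not from the article):

<li>
	<blockquote cite="andy">
		<p>This project will use XHTML1.0 Strict, CSS2.1 and all that malarkey.</p>
		<p class="date">Friday 15 December, 9.30am</p>
	</blockquote>
</li>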
149 Underpants Over My Trousers With Christmas approaching faster than a speeding bullet, this is the perfect time for you to think about that last minute present to buy for the web geek in your life. If you’re stuck for ideas for that special someone, forget about that svelte iPhone case carved from solid mahogany and head instead to your nearest comic-book shop and pick up a selection of comics or graphic novels. (I’ll be using some of my personal favourite comic books as examples throughout). Trust me, whether your nearest and dearest has been reading comics for a while or has never peered inside this four-colour world, they’ll thank you for it. Aside from indulging their superhero fantasies, comic books can provide web designers with a rich vein of inspiring ideas and material to help them create shirt-button-popping, trouser-bursting work for the web. I know from my own personal experience that looking at aspects of comic book design, layout and conventions and thinking about the ways that they can inform web design has taken my design work in often-unexpected directions. There are far too many fascinating facets of comic book design that provide web designers with inspiration to cover in the time that it takes to pull your underpants over your trousers. So I’m going to concentrate on one muscle-bound aspect of comic design, one that will make you think differently about how you lay out the content of your pages in panels. A suitcase full of Kryptonite Now, to the uninitiated onlooker, the panels of a comic book may appear to perform a similar function to still frames from a movie. But inside the pages of a comic, panels must work harder to help the reader understand the timing of a story. It is this method for conveying narrative timing to a reader that I believe can be highly useful to designers who work on the web, as timing, drama and suspense are as important in the web world as they are in worlds occupied by costumed crime fighters and superheroes. I’d like you to start by closing your eyes and thinking about your own process for laying out panels of content on a page. OK, you’ll actually be better off with your eyes open if you’re going to carry on reading. I’ll bet you a suitcase full of Kryptonite that you often, if not always, structure your page layouts and decide on the dimensions of those panels according to either: The base grid that you are working to The Golden Ratio or another mathematical schema More likely, I bet that you decide on the size and the number of your panels based on the amount of content that will be going into them. From today, I’d like you to think about taking a different approach. This approach not only addresses horizontal and vertical space, but also adds the dimension of time to your designs. Slowing down the action A comic book panel not only acts as a container for its content but also indicates to a reader how much time passes within the panel and, as a result, how much time the reader should focus their attention on that one panel. Smaller panels create swift eye movement and shorter bursts of attention. Larger panels give the perception of more time elapsing in the story and subconsciously demand that a reader devotes more time to focus on them. Concrete by Paul Chadwick (Dark Horse Comics) This use of panel dimensions to control timing can also be useful for web designers in designing the reading/user experience. Imagine a page full of information about a product or service. 
You’ll naturally want the reader to focus for longer on the key benefits of your offering rather than perhaps its technical specifications. Now take a look at this spread of pages from Watchmen by Alan Moore and Dave Gibbons. Watchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004) Throughout this series of (originally) twelve editions, artist Dave Gibbons stuck rigidly to his 3×3-panels-per-page design and deviated from it only for dramatic moments within the narrative. In particular, during the last few pages of chapter eleven, Gibbons adds weight to the impending doom by slowing down the action with larger panels, forcing the reader to think longer about what is coming next. The action then speeds up through twelve smaller panels until the final panel: nothing more than white space and yet one of the most iconic and thought-provoking in the entire twelve-book series. Watchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004) On the web it is common for clients to ask designers to fill every pixel of screen space with content, perhaps not understanding the drama that can be added by nothing more than white space. In the final chapter, Gibbons emphasises the carnage that has taken place (unseen between chapters eleven and twelve) by presenting the reader with six full pages containing only single, large panels. Watchmen by Alan Moore and Dave Gibbons (Diamond Comic Distributors 2004) This drama, created by the artist’s use of panel dimensions to control timing, is a technique that web designers can also usefully employ when emphasising important areas of content. Think back for a moment to the home page of Apple Inc., during the launch of their iconic iPhone, where the page contained nothing more than a large image and the phrase “Say hello to iPhone”. Rather than fill the page with sales messages, Apple’s designers allowed the space itself to tell the story and created a real sense of suspense and expectation among their readers. Borders Whereas on the web, panel borders are commonly used to add emphasis to particular areas of content, in comic books they take on a different and sometimes opposite role. In the examples so far, borders have contained all of the action. Removing a border can have the opposite effect to what you might at first think. Rather than taking emphasis away from their content, in comics, borderless panels allow the reader’s eyes to linger for longer on the content, adding even stronger emphasis. Concrete by Paul Chadwick (Dark Horse Comics) This effect is amplified when the borderless content is allowed to bleed to the edges of a page. Because the content is no longer confined, except by the edges of the page (both comic and web), the reader’s eye is left to wander out into open space. Concrete by Paul Chadwick (Dark Horse Comics) This type of open, borderless content panel can be highly useful in placing emphasis on the most important content on a page, in exactly the opposite way to the one we commonly employ on the web today. So why is time an important dimension to think about when designing your web pages? On one level, we are often already concerned with the short attention spans of visitors to our pages and should work hard to allow them to quickly and easily find and read the content that both they and we think is important. Learning lessons from comic book timing can only help us improve that experience. On another: timing, suspense and drama are already everyday parts of the web browsing experience. 
Will a reader see what they expect when they click from one page to the next? Or are they in for a surprise? Most importantly, I believe that the web, like comics, is about storytelling: often the story of the experiences that a customer will have when they use our product or service or interact with our organisation. It is this element of storytelling that can be greatly improved by learning from comics. It is exactly this kind of learning and adapting from older, more established and, at first glance, unrelated media that can make a real, distinctive difference to the design work that you create. Fill your stockings If you’re a visual designer or developer and are not a regular reader of comics, from the moment that you pick up your first title, I know that you will find them inspiring. I will be writing more about comic design applied to the web, and speaking about it at several (to be announced) events this coming year. I hope you’ll be slipping your underpants over your trousers and joining me then. In the meantime, here is some further reading to pick up on your next visit to a comic book or regular bookshop and slip into your stockings: Comics and Sequential Art by Will Eisner (Northern Light Books 2001) Understanding Comics: The Invisible Art by Scott McCloud (Harper Collins 1994) Have a happy superhero season. (I would like to thank all of the talented artists, writers and publishers whose work I have used as examples in this article and the hundreds more who inspire me every day with their tall tales and talent.) 2007 Andy Clarke andyclarke 2007-12-14T00:00:00+00:00 https://24ways.org/2007/underpants-over-my-trousers/ design
189 Ignorance Is Bliss This is a true story. Meet Mike Mike’s a smart guy. He knows a great browser when he sees one. He uses Firefox on his Windows PC at work and Safari on his Mac at home. Mike asked us to design a Web site for his business. So we did. We wanted to make the best Web site for Mike that we could, so we used all of the CSS tools that are available today. That meant using RGBa colour to layer elements, border-radius to add subtle rounded corners and, possibly the most experimental of all new CSS, generated gradients. The home page Mike sees in Safari on his Mac Mike loves what he sees. Meet Sam Sam works with Mike. She uses Internet Explorer 7 because it came on the Windows laptop that the company bought her when she joined. The home page Sam sees in Internet Explorer 7 on her PC Sam loves the new Web site too. How could both of them be happy when they experienced the Web site differently? The new WYSIWYG When I first presented my designs to Mike and Sam, I showed them a Web page made with HTML and CSS in their respective browsers and not a picture of a Web page. By showing neither of them a static image of my design, I set none of the false expectations that, by definition, a static Photoshop or Fireworks visual would have established. Mike saw rounded corners and subtle shadows in Firefox and Safari. Sam saw something equally nice, just a little different, in Internet Explorer. Both were very happy because they saw something that they liked. Neither knew, or needed to know, about the subtle differences between browsers. Their users don’t need to know either. That’s because in the real world, people using the Web don’t find a Web site that they like, then open up another browser to check that it looks the same. They simply buy what they came to buy, read what they came to read, do what they came to do, then get on with their lives in blissful ignorance of what they might be seeing in another browser. Often when I talk or write about using progressive CSS, people ask me, “How do you convince clients to let you work that way? What’s your secret?” Secret? I tell them what they need to know, on a need-to-know basis. Epilogue Sam has a new iPhone that Mike bought for her as a reward for achieving her sales targets. She loves her iPhone and was surprised at just how fast and good-looking the company Web site appears on it. So she asked, “Andy, I didn’t know you optimised our site for mobile. I don’t remember seeing an invoice for that.” I smiled. “That one was on the house.” 2009 Andy Clarke andyclarke 2009-12-23T00:00:00+00:00 https://24ways.org/2009/ignorance-is-bliss/ business
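The article contains no code, but a sketch of the kind of progressive CSS it describes might look like the following, using era-appropriate prefixed syntax; the selector and values are my own invention, not from Mike’s actual site. Internet Explorer 7 simply ignores the declarations it doesn’t understand and renders the simpler fall-backs:

.banner {
	background-color : #8b2323;	/* the solid colour Sam sees in Internet Explorer 7 */
	background-color : rgba(139, 35, 35, 0.9);	/* RGBa layering for Mike’s browsers */
	background-image : -webkit-gradient(linear, left top, left bottom, from(#a33), to(#711));	/* generated gradient, Safari */
	background-image : -moz-linear-gradient(top, #a33, #711);	/* generated gradient, Firefox */
	-webkit-border-radius : 8px;	/* subtle rounded corners */
	-moz-border-radius : 8px;
}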
206 Getting Hardboiled with CSS Custom Properties Custom Properties are a fabulous new feature of CSS and have widespread support in contemporary browsers. But how do we handle browsers without support for CSS Custom Properties? Do we wait until those browsers are lying dead in a ditch before we use them? Do we tool up and prop up our CSS using a post-processor? Or do we get tough? Do we get hardboiled? Previously only pre-processing tools like LESS and Sass enabled developers to use variables in their stylesheets, but now Custom Properties have brought variables natively to CSS. How do you write a custom property? It’s hardly a mystery. Simply add two dashes to the start of a property name. Like this: --color-text-default : black; If you’re more the underscore type, try this: --color_text_default : black; Hyphens or underscores are allowed in property names, but don’t be a chump and try to use spaces. Custom property names are also case-sensitive, so --color-text-default and --Color_Text_Default are two distinct properties. To use a custom property in your style rules, use var(), which tells a browser to retrieve the value of a property. In the next example, the browser retrieves the black colour from the --color-text-default variable and applies it to the body element: body { color : var(--color-text-default); } Like variables in LESS or Sass, CSS Custom Properties mean you don’t have to be a dumb mug and repeat colour, font, or size values multiple times in your stylesheets. Unlike a preprocessor variable though, CSS Custom Properties use the cascade, can be modified by media queries and other state changes, and can also be manipulated by JavaScript. (Serg Hospodarets wrote a fabulous primer on CSS Custom Properties where he dives deeper into the code and possible applications.) Browser support Now it’s about this time that people normally mention browser support. So what about support for CSS Custom Properties? Current versions of Chrome, Edge, Firefox, Opera, and Safari are all good. Internet Explorer 11 and before? Opera Mini? Nasty. Sound familiar? Can I Use css-variables? Data on support for the css-variables feature across the major browsers from caniuse.com. Not to worry, we can manually check for Custom Property support in a browser by using an @supports directive, like this: :root { --color-text-default : black; } body { color : black; } @supports ((--foo : bar)) { body { color : var(--color-text-default); } } In that example we first set body element text to black and then override that colour with a Custom Property value when the browser supports our fictitious foo bar variable. Substitutions If we reference a variable that hasn’t been defined, that won’t be a problem as browsers are smart enough to ignore the value altogether. If we need a cast-iron alibi, use substitutions to specify a fall-back value. body { color : var(--color-text-default, black); } Substitutions are similar to font stacks in that they contain a comma-separated list of values. If there’s no value associated with a property, a browser will ignore it and move on to the next value in the list. Post-processing Of course we could use a post-processor plugin to turn Custom Properties into plain CSS, but hang on one goddam minute kiddo. Haven’t we been down this road before? Didn’t we engineer elaborate workarounds to enable us to use ‘advanced’ CSS3 properties like border-radius, CSS columns, and Flexbox in the past? We did what we had to do to get the job done, but came away feeling dirty. 
I think there’s a better way, one that allows us to enjoy the benefits of CSS Custom Properties in browsers that support them, while providing an appropriate, but not identical, experience for people who use less capable browsers. Guess what, I’ve been down this road before too. 2Tone Stuff & Nonsense When Internet Explorer 6 was the big dumb browser everyone hated, I served two different designs on my website. For the modern browsers of the time, mod arrows and targets were everywhere in glorious red, white, and blue, and I implemented all of them using CSS attribute selectors which were considered advanced at the time: [class="banner"] { background-color : red; } Internet Explorer 6 ignored any selectors it didn’t understand, so people using that browser saw a simpler black and white, 2Tone-based design that I’d implemented for them using class selectors: .banner { background-color : black; } [class="banner"] { background-color : red; } You don’t have to be a detective to find out that most people thought I’d lost my wits, but Microsoft even used my website as a reference when they tested attribute selectors in Internet Explorer 7. They did, as I suggested, “Stomp to da betta browser.” Dumb browsers look the other way So how does this approach relate to tackling any lack of support for CSS Custom Properties? How can we take advantage of them without worrying about browsers with no support and having to implement complex workarounds, or spending hours specifying fallbacks that perfectly match our designs? Turns out, the answer is built into CSS, and always has been, because when browsers don’t know what they’re looking at, they look away. All we have to do is specify values for a simpler design first, and then follow that up with the values in our CSS Custom Properties: body { color : black; color : var(--color-text-default, black); } All browsers understand the first value (black), and if they’re smart enough to understand the second (var(--color-text-default)), they’ll use it and override the first. If they’re too damn stupid to understand the custom property value, they’ll ignore it. Nobody dies. Repeat this for every style that contains a variable, baking an alternative, perhaps simpler design into your stylesheets for people who use less capable browsers, just like I did with Stuff & Nonsense. Conclusion I doubt that anyone agrees with presenting a design that looks broken or unloved—and I’m not advocating for that—but websites need not look the same in every browser. We can use substitutions to present a simpler design to people using less capable browsers. The decision about when to start using new CSS properties isn’t always a technical one. Sometimes a change in attitude about browser support is all that’s required. So get tough with dumb browsers and benefit from all the advantages that CSS Custom Properties offer. Get hardboiled. Resources: It’s Time To Start Using CSS Custom Properties—Smashing Magazine Using CSS variables correctly—Mike Riethmuller Developing Inspired Guides with CSS Custom Properties (variables)—Andy Clarke 2017 Andy Clarke andyclarke 2017-12-13T00:00:00+00:00 https://24ways.org/2017/getting-hardboiled-with-css-custom-properties/ code
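As a footnote to the cascade point made earlier, here’s a minimal sketch (the property name and values are mine, not from the article) of something no preprocessor variable can do: redefining a custom property inside a media query updates every rule that references it, while the hardboiled fallback pattern keeps less capable browsers happy:

:root {
	--size-text-default : 1em;
}

@media (min-width : 60em) {
	:root {
		--size-text-default : 1.25em;	/* every var() reference picks up the new value */
	}
}

body {
	font-size : 1em;	/* the simpler design for browsers without support */
	font-size : var(--size-text-default, 1em);
}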
237 Circles of Confusion Long before I worked on the web, I specialised in training photographers how to use large format, 5×4″ and 10×8″ view cameras – film cameras with swing and tilt movements, bellows, and upside-down, back-to-front images viewed on dim ground-glass screens. It’s been fifteen years since I clicked a shutter on a view camera, but some things have stayed with me from those years. In photography, even the best lenses don’t focus light onto a point (infinitely small in size) but onto ‘spots’ or circles in the ‘film/image plane’. These circles of light have dimensions, despite being microscopically small. They’re known as ‘circles of confusion’. The larger these circles of light become, the more unsharp a photograph appears. On the flip side, when circles are smaller, an image looks sharper and more in focus. This is the basis for photographic depth of field and with that comes the knowledge that no photograph can be perfectly focused, never truly sharp. Instead, photographs can only be ‘acceptably unsharp’. Acceptable unsharpness is now a concept that’s relevant to the work we make for the web, because often – unless we compromise – websites cannot look or be experienced exactly the same across browsers, devices or platforms. Accepting that fact, and learning to look upon these natural differences as creative opportunities instead of imperfections, can be tough. Deciding which aspects of a design must remain consistent and, therefore, possibly require more time, effort or compromises can be tougher. Circles of confusion can help us, our bosses and our customers make better, more informed decisions. Acceptable unsharpness Many clients still demand that every aspect of a design should be ‘sharp’ – that every user must see rounded boxes, gradients and shadows – without regard for the implications. I believe that this stems largely from the fact that they have previously been shown designs – and asked for sign-off – using static images. It’s also true that in the past, organisations have invested heavily in style guides which, while maybe still useful in offline media, have a strictness that often fails to allow for the flexibility that we need to create experiences that are appropriate to a user’s browser or device capabilities. We live in an era where web browsers and devices have wide-ranging capabilities, and websites can rarely look or be experienced exactly the same across them. Is a particular typeface vital to a user’s experience of a brand? How important are gradients or shadows? Are rounded corners really that necessary? These decisions determine how ‘sharp’ an element should be across browsers with different capabilities and, therefore, how much time, effort or extra code and images we devote to achieving consistency between them. To help our clients make those decisions, we can use circles of confusion. Circles of confusion Using circles of confusion involves plotting aspects of a visual design into a series of concentric circles, starting at the centre with elements that demand the most consistency. Then, work outwards, placing elements in order of their priority so that they become progressively ‘softer’, more defocused as they’re plotted into outer rings. If layout and typography must remain consistent, place them in the centre circle as they’re aspects of a design that must remain ‘sharp’. When gradients are important – but not vital – to a user’s experience of a brand, plot them close to, but not in, the centre.
This makes everyone aware that to achieve consistency, you’ll need to carve out extra images for browsers that don’t support CSS gradients. If achieving rounded corners or shadows in all browsers isn’t important, place them into outer circles, allowing you to save time by not creating images or employing JavaScript workarounds. I’ve found plotting aspects of a visual design into circles of confusion is a useful technique when explaining the natural differences between browsers to clients. It sets more realistic expectations and creates an environment for more meaningful discussions about progressive and emerging technologies. Best of all, it enables everyone to make better and more informed decisions about design implementation priorities. Involving clients makes the implications of the decisions they make more transparent. For me, this has sometimes meant shifting deadlines or it has allowed me to more easily justify an increase in fees. Most important of all, circles of confusion have helped the people that I work with move beyond yesterday’s one-size-fits-all thinking about visual design, towards accepting the rich diversity of today’s web. 2010 Andy Clarke andyclarke 2010-12-23T00:00:00+00:00 https://24ways.org/2010/circles-of-confusion/ process
246 Designing Your Site Like It’s 1998 It’s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary—and my fourteenth contribution to 24 ways—I’d like to explain how I would’ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films. My design for Planes, Trains and Automobiles is fixed at 800px wide. Developing a <frameset> framework I’ll start by using frames to set up the framework for this new website. Frames are individual pages—one for navigation, the other for my content—pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a <frameset> element. I add two rows to my <frameset>; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don’t want frame borders or any space between my frames, I set frameborder and framespacing attributes to 0: <frameset frameborder="0" framespacing="0" rows="50,*"> […] </frameset> Next I add the source of my two frame documents. I don’t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame: <frameset frameborder="0" framespacing="0" rows="50,*"> <frame noresize scrolling="no" src="nav.html"> <frame src="content.html"> </frameset> I do want links from my navigation to open in the content frame, so I give each <frame> a name so I can specify where I want links to open: <frameset frameborder="0" framespacing="0" rows="50,*"> <frame name="navigation" noresize scrolling="no" src="nav.html"> <frame name="content" src="content.html"> </frameset> The framework for this website is simple as it contains only two horizontal rows. Should I need a more complex layout, I can nest as many framesets—and as many individual documents—as I need: <frameset rows="50,*"> <frame name="navigation"> <frameset cols="25%,*"> <frame name="sidebar"> <frame name="content"> </frameset> </frameset> Letterbox framesets were a common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space. Handling older browsers Sadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content: <noframes> <body> <p>This page uses frames, but your browser doesn’t support them. Please upgrade your browser.</p> </body> </noframes> Forcing someone back into a frame Sometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a <frameset>, people could easily miss out on other parts of a design. This short script will prevent this from happening and, because it’s vanilla JavaScript, it doesn’t require a library such as jQuery: <script type="text/javascript"> if (top == self) { location = 'frameset.html'; } </script> Laying out my page Before starting my layout, I add a few basic background and colour styles.
I must include these attributes in every page on my website: <body background="img/container.jpg" bgcolor="#fef7fb" link="#245eab" alink="#245eab" vlink="#3c146e" text="#000000"> I want absolute control over how people experience my design and don’t want to allow it to stretch, so I first need a <table> which limits the width of my layout to 800px. The align attribute will keep this <table> in the centre of someone’s screen: <table width="800" align="center"> <tr> <td>[…]</td> </tr> </table> Although they were developed for displaying tabular information, the cells and rows which make up the <table> element make it ideal for the precise implementation of a design. I need several tables—often nested inside each other—to implement my design. These include tables for a banner and three rows of content: <table width="800" align="center"> <table>[…]</table> <table> <table> <table>[…]</table> </table> </table> <table>[…]</table> <table>[…]</table> </table> The width of the first table—used for my banner—is fixed to match the logo it contains. As I don’t need borders, padding, or spacing between these cells, I use attributes to remove them: <table border="0" cellpadding="0" cellspacing="0" width="587" align="center"> <tr> <td><img src="logo.gif" border="0" width="587" alt="Logo"></td> </tr> </table> The next table—which contains the largest image, introduction, and a call-to-action—is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images: <tr> <td><img src="spacer.gif" width="593" height="1"></td> <td><img src="spacer.gif" width="207" height="1"></td> </tr> The height and width of these “shims” or “spacers” are only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development. For the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images: <tr> <td background="slice-1.jpg" height="473"> </td> <td background="slice-2.jpg" height="473">[…]</td> </tr> <tr> <td background="slice-3.jpg" height="388"> </td> </tr> I use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table—this time with a white background—inside it: <table border="0" cellpadding="1" cellspacing="0"> <tr> <td> <table bgcolor="#ffffff" border="0" cellpadding="0" cellspacing="0"> […] </table> </td> </tr> </table> Adding details Tables are fabulous tools for laying out a page, but they’re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my “Buy the DVD” call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive.
Then, I add those images as backgrounds and use spacers to perfectly size my button: <table border="0" cellpadding="0" cellspacing="0"> <tr> <td background="btn-1.jpg" border="0" height="48" width="30"><img src="spacer.gif" width="30" height="1"></td> <td background="btn-2.jpg" border="0" height="48"> <center> <a href="" target="_blank"><b>Buy the DVD</b></a> </center> </td> <td background="btn-3.jpg" border="0" height="48" width="30"><img src="spacer.gif" width="30" height="1"></td> </tr> </table> I use those same elements to add details to headlines and lists too. Adding a “bullet” to each item in a list needs only two additional table cells, a circular graphic, and a spacer: <table border="0" cellpadding="0" cellspacing="0"> <tr> <td width="10"><img src="li.gif" border="0" width="8" height="8"> </td> <td><img src="spacer.gif" width="10" height="1"> </td> <td>Directed by John Hughes</td> </tr> </table> Implementing a typographic hierarchy So far I’ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use <font> elements to change the typeface from the browser’s default to any font installed on someone’s device: <font face="Arial">Planes, Trains and Automobiles is a comedy film […]</font> To adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser’s default. I use a size of 4 for this headline and 2 for the text which follows: <font face="Arial" size="4"><b>Steve Martin</b></font> <font face="Arial" size="2">An American actor, comedian, writer, producer, and musician.</font> When I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every element on all pages on my website. NB: I use as many <br> elements as needed to create space between headlines and paragraphs. View the final result (and especially the source). My modern-day design for Planes, Trains and Automobiles. I can imagine many people reading this and thinking “This is terrible advice because we don’t develop websites like this in 2018.” That’s true. We have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes: font-variant-caps: titling-caps; font-variant-ligatures: common-ligatures; font-variant-numeric: oldstyle-nums; Grid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS: body { display: grid; grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr; grid-template-rows: auto; grid-column-gap: 2vw; grid-row-gap: 1vh; } Flexbox has made it easy to develop flexible components such as navigation links: nav ul { display: flex; } nav li { flex: 1; } Just one line of CSS can create multiple columns of fluid type: main { column-width: 12em; } CSS Shapes enable text to flow around irregular shapes including polygons: [src*="main-img"] { float: left; shape-outside: polygon(…); } Today, we wouldn’t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead: .btn { background: linear-gradient(#8B1212, #DD3A3C); border-radius: 1em; box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50); } CSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on.
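To pick just one of those, Custom Properties and a feature query can theme that same button without a single image or table. A quick sketch, with variable names of my own invention:

.btn { background : #8B1212; } /* solid colour for browsers without support */

@supports (--a : b) {
  :root {
    --btn-start : #8B1212;
    --btn-end : #DD3A3C;
  }
  .btn { background : linear-gradient(var(--btn-start), var(--btn-end)); }
}

The list of what’s possible keeps growing.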
So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we’re certain we know how to design and develop products and websites better than we did at the end of 1998. Strange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That’s why it’s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today—tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass—will be any more relevant in the future than <font> elements, frames, layout tables, and spacer images are today. I have no prediction for what the web will be like twenty years from now. However, I want to believe we’ll build on what we’ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won’t be repeated. Head over to my website if you’d like to read about how I’d implement my design for ‘Planes, Trains and Automobiles’ today. 2018 Andy Clarke andyclarke 2018-12-23T00:00:00+00:00 https://24ways.org/2018/designing-your-site-like-its-1998/ code
273 There’s No Formula for Great Designs Before he combined them with fluid images and CSS3 media queries to coin responsive design, Ethan Marcotte described fluid grids — one of the most enjoyable parts of responsive design. Enjoyable that is, if you like working with math(s). But fluid grids aren’t perfect and, unless we’re careful when applying them, they can sometimes result in a design that feels disconnected. Recapping fluid grids If you haven’t read Ethan’s Fluid Grids, now would be a good time to do that. It centres around a simple formula for converting pixel widths to percentages: (target ÷ context) × 100 = result How does that work in practice? Well, take that Fireworks or Photoshop comp you’re working on (I call them static design visuals, or just visuals). Of course, everything on that visual — column divisions, inline images, navigation elements, everything — is measured in pixels. Now: Pick something in the visual and measure its width. That’s our target. Take that target measurement and divide it by the width of its parent (context). Multiply what you’ve got by 100 (shift two decimal places). What you’re left with is a percentage width to drop into your style sheets. For example, divide this 300px wide sidebar division by its 948px parent and then multiply by 100: your original 300px is neatly converted to 31.646%. .content-sub { width : 31.646%; /* 300px ÷ 948px = .31646 */ } That formula makes it surprisingly simple for even die-hard fixed-width aficionados to convert their visuals to percentage-based, fluid layouts. It’s a handy formula for those who still design using static visuals, and downright essential for those situations where one person in an organization designs in Fireworks or Photoshop and another develops with CSS. Why? Well, although I think that designing in a browser makes the best sense — particularly when designing for multiple devices — I’ll wager most designers still make visuals in Fireworks or Photoshop and use them for demonstrations and to get feedback and sign-off. That’s OK. If you haven’t made the transition to content-out designing in a browser yet, the fluid grids formula helps you carry on pushing pixels a while longer. You can carry on moving pixel width measurements from your visuals to your style sheets, too, in the same way you always have. You can be precise to the pixel and even apply a grid image as a CSS background to help you keep everything lined up perfectly. Once you’re done, and the fixed width layout in the browser matches your visual, loop back through your style sheets and convert those pixels to percentages using the fluid grids formula. With very little extra work, you’ll have a fluid implementation of your fixed width layout. The fluid grids formula is simple and incredibly effective, but not long after I started working responsively I realized that the formula shouldn’t (always) be a one-fix, set-and-forget calculation. I noticed that unless we compensate for problems it sometimes creates, the result can be a disconnected design. Staying connected Good design relies on connectedness, a feeling of natural balance between elements and the grid they’re placed on. Give an element greater prominence or position in a visual hierarchy and you can fundamentally alter the balance and sometimes the meaning of a design. Different from a browser’s page zooming feature — where images, text and overall layout change size by the same ratio — fluid grids flex a layout in response to a window or device width.
Columns expand and contract, and within them fluid media (images and videos) can also change size. This can be one of the most impressive demonstrations of responsive design. But not every element within a fluid grid can change size along with the window or device width. For example, type size and leading won’t change along with a column’s width. When columns and elements within them change width, a visual hierarchy can all too easily be broken, and along with it the relationship between element sizes and the outer window or viewport. This can happen quickly if you make just one set of fluid grid calculations and use those percentages across every screen width, from smartphones through tablets and up to large desktops. The answer? Make several sets of fluid grid calculations, each one at a significant window or device width breakpoint. Then apply those new percentages, when needed, to help keep elements in proportion and maintain balance and connectedness. Here’s how I work. Avoiding disconnection I’ve never been entirely happy with grid frameworks such as the 960 Grid System, so I start almost every project by creating a custom grid to inform my layout decisions. Here’s a plain version of a grid from a recent project that I’ll use as an illustration. This project’s grid comprises 84px columns and 24px gutters. This creates an odd number of columns at common tablet and desktop widths, and allows for 300px fixed-width assets — useful when I need to fit advertising into a desktop layout’s sidebar. Showing common advertising sizes For this project I chose three 320 and Up breakpoints above 320px and, after placing as many columns as would fit those breakpoint widths, I derived three content widths: at the 768px breakpoint, 7 columns give a 732px content width; at 992px, 9 columns give 948px; and at 1,382px, 13 columns give 1,380px. Here’s my grid again, this time with pixel measurements and breakpoints overlaid. Showing pixel measurements and breakpoints Now cast your mind back to the fluid grids calculation I made earlier. I divided a 300px element by 948px and arrived at 31.646%. For some elements it’s possible to use that percentage across all screen widths, but others will feel too small in relation to a narrower 768px and too large inside 1,380px. To help maintain connectedness, I make a set of fluid grid calculations based on each of the content widths I established earlier. Now I can shift an element’s percentage width up or down when I switch to a new breakpoint and content width. For example: 300px is 40.984% of 732px 300px is 31.646% of 948px 300px is 21.739% of 1,380px I’ll add all those fluid grid percentages to my grid image and save it for quick reference. Showing percentages at all breakpoints Then I can apply those different percentage widths to elements at each breakpoint using CSS3 media queries. For example, that sidebar division again: /* 732px, 7-column width */ @media only screen and (min-width: 768px) { .content-sub { width : 40.984%; /* 300px ÷ 732px = .40984 */ } } /* 948px, 9-column width */ @media only screen and (min-width: 992px) { .content-sub { width : 31.646%; /* 300px ÷ 948px = .31646 */ } } /* 1380px, 13-column width */ @media only screen and (min-width: 1382px) { .content-sub { width : 21.739%; /* 300px ÷ 1380px = .21739 */ } } The number of changes you make to a layout at different breakpoints will, of course, depend on the specifics of the design you’re working on.
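As an aside, browsers can now do the fluid grid arithmetic themselves with calc(), so you could skip maintaining the long decimals by hand. Here’s a sketch using the same 300px target and the content widths above (calc() wasn’t a realistic option when fluid grids were first described):

/* let the browser divide target by context */
@media only screen and (min-width: 768px) {
  .content-sub { width : calc(300 / 732 * 100%); } /* ≈ 40.984% */
}

@media only screen and (min-width: 992px) {
  .content-sub { width : calc(300 / 948 * 100%); } /* ≈ 31.646% */
}

@media only screen and (min-width: 1382px) {
  .content-sub { width : calc(300 / 1380 * 100%); } /* ≈ 21.739% */
}

You still have to choose the breakpoints and content widths yourself, though.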
Yes, this is additional work, but the result will be a layout that feels better balanced and within which elements remain in harmony with each other while they respond to new screen or device widths. Putting the design in responsive web design Until now, many of the conversations around responsive web design have been about aspects of technical implementation, rather than design. I believe we’re only beginning to understand what’s involved in designing responsively. In future, we’ll likely be making design decisions not just about proportions but also about responsive typography. We’ll also need to learn how to adapt our designs to device characteristics such as touch targets and more. Sometimes we’ll make decisions to improve function, other times because they make a design ‘feel’ right. You’ll know when you’ve made a right decision. You’ll feel it. After all, there really is no formula for making great designs. 2011 Andy Clarke andyclarke 2011-12-23T00:00:00+00:00 https://24ways.org/2011/theres-no-formula-for-great-designs/ ux
311 Designing Imaginative Style Guides (Living) style guides and (atomic) pattern libraries are “all the rage,” as my dear old Nana would’ve said. If articles and conference talks are to be believed, making and using them has become incredibly popular. I think there are plenty of ways we can improve how style guides look and make them better at communicating design information to creatives without it getting in the way of information that technical people need. Guides to libraries of patterns Most of my consulting work and a good deal of my creative projects now involve designing style guides. I’ve amassed a huge collection of brand guidelines and identity manuals as well as, more recently, guides to libraries of patterns intended to help designers and developers make digital products and websites. Two pages from one of my Purposeful style guide packs. Designs © Stuff & Nonsense. “Style guide” is an umbrella term for several types of design documentation. Sometimes we’re referring to static style or visual identity guides, other times voice and tone. We might mean front-end code guidelines or component/pattern libraries. These all offer something different but more often than not they have something in common. They look ugly enough to have been designed by someone who enjoys configuring a router. OK, that was mean, not everyone’s going to think an unimaginative style guide design is a problem. After all, as long as a style guide contains information people need, how it looks shouldn’t matter, should it? Inspiring not encyclopaedic Well here’s the thing. Not everyone needs to take the same information away from a style guide. If you’re looking for markup and styles to code a ‘media’ component, you’re probably going to be the technical type, whereas if you need to understand the balance of sizes across a typographic hierarchy, you’re more likely to be a creative. What you need from a style guide is different. Sure, some people¹ need rules: “Do this (responsive pattern)” or “don’t do that (auto-playing video).” Those people probably also want facts: “Use this (hexadecimal value) and not that inaccessible colour combination.” Style guides need to do more than list facts and rules. They should demonstrate a design, not just document its parts. The best style guides are inspiring, not encyclopaedic. I’ll explain by showing how many style guides currently present information about colour. Colours communicate I’m sure you’ll agree that alongside typography, colour’s one of the most important ingredients in a design. Colour communicates personality, creates mood and is vital to an easily understandable interactive vocabulary. So you’d think that an average style guide would describe all this in any number of imaginative ways. Well, you’d be disappointed, because the most inspiring you’ll find looks like a collection of chips from a paint chart. Lonely Planet’s Rizzo does a great job of separating its Design Elements from UI Components, and while its ‘Click to copy’ colour values are a thoughtful touch, you’ll struggle to get a feeling for Lonely Planet’s design by looking at their colour chips. Lonely Planet’s Rizzo style guide. Lonely Planet’s approach is a common way to display colour information and it’s one that you’ll also find at Greenpeace, Sky, The Times and on countless more style guides. Greenpeace, Sky and The Times style guides.
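There’s barely anything to these chip pages, either. Here’s a minimal sketch of the pattern; the class names and colour values are mine, purely illustrative, and not taken from any of those guides:

/* a wrapped row of labelled, rectangular colour chips */
.swatches {
  display : flex;
  flex-wrap : wrap;
  list-style : none;
  padding : 0;
}

.swatch {
  width : 10em;
  height : 6em;
  padding : 0.5em;
  color : #fff;
  font-family : sans-serif;
}

.swatch-brand { background-color : #00875a; }
.swatch-accent { background-color : #d93900; }

Functional, certainly, but like the guides above it documents a design’s parts without demonstrating the design.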
GOV.UK—not a website known for its creative flair—varies this approach by using circles, which I find strange as circles don’t feature anywhere else in its branding or design. On the plus side though, their designers have provided some context by categorising colours by usage such as text, links, backgrounds and more. GOV.UK style guide. Google’s Material Design offers an embarrassment of colours but most helpfully it also advises how to combine its primary and accent colours into usable palettes. Google’s Material Design. While the ability to copy colour values from a reference might be all a technical person needs, designers need to understand why particular colours were chosen as well as how to use them. Inspiration not documentation Few style guides offer any explanation and even less by way of inspiring examples. Most are extremely vague when they describe colour: “Use colour as a presentation element for either decorative purposes or to convey information.” The Government of Canada’s Web Experience Toolkit states, rather obviously. “Certain colors have inherent meaning for a large majority of users, although we recognize that cultural differences are plentiful.” Salesforce tell us, without actually mentioning any of those plentiful differences. I’m also unsure what makes the Draft U.S. Web Design Standards colours a “distinctly American palette” but it will have to work extremely hard to achieve its goal of communicating “warmth and trustworthiness” now. In Canada, “bold and vibrant” colours reflect Alberta’s “diverse landscape.” Adding more colours to their palette has made Adobe “rich, dynamic, and multi-dimensional” and at Skype, colours are “bold, colourful (obviously) and confident” although their style guide doesn’t actually provide information on how to use them. The University of Oxford, on the other hand, is much more helpful by explaining how and why to use their colours: “The (dark) Oxford blue is used primarily in general page furniture such as the backgrounds on the header and footer. This makes for a strong brand presence throughout the site. Because it features so strongly in these areas, it is not recommended to use it in large areas elsewhere. However it is used more sparingly in smaller elements such as in event date icons and search/filtering bars.” OpenTable style guide. The designers at OpenTable have cleverly considered how to explain the hierarchy of their brand colours by presenting them and their supporting colours in various size chips. It’s also obvious from OpenTable’s design which colours are primary, supporting, accent or neutral without them having to say so. Art directing style guides For the style guides I design for my clients, I go beyond simply documenting their colour palette and type styles and describe visually what these mean for them and their brand. I work to find distinctive ways to present colour to better represent the brand and also to inspire designers. For example, on a recent project for SunLife, I described their palette of colours and how to use them across a series of art directed pages that reflect the lively personality of the SunLife brand. Information about HEX and RGB values, Sass variables and when to use their colours for branding, interaction and messaging is all there, but in a format that can appeal to both creative and technical people. SunLife style guide. Designs © Stuff & Nonsense. Purposeful style guides If you want to improve how you present colour information in your style guides, there’s plenty you can do. 
For a start, you needn’t confine colour information to the palette page in your style guide. Find imaginative ways to display colour across several pages to show it in context with other parts of your design. Here are two CSS gradient-filled ‘cover’ pages from my Purposeful style sheets. Colour impacts other elements too, including typography, so make sure you include colour information on those pages, and vice-versa. Purposeful. Designs © Stuff & Nonsense. A visual hierarchy can be easier to understand than labelling colours as ‘primary,’ ‘supporting,’ or ‘accent,’ so find creative ways to present that hierarchy. You might use panels of different sizes or arrange boxes on a modular grid to fill a page with colour. Don’t limit yourself to rectangular colour chips; use circles or other shapes created using only CSS. If irregular shapes are a part of your brand, fill SVG silhouettes with CSS and then wrap text around them using CSS shapes. Purposeful. Designs © Stuff & Nonsense. Summing up In many ways I’m as frustrated with style guide design as I am with the general state of design on the web. Style guides and pattern libraries needn’t be dull in order to be functional. In fact, they’re the perfect place for you to try out new ideas and technologies. There’s nowhere better to experiment with new properties like CSS Grid than on your style guide. The best style guide designs showcase new approaches and possibilities, and don’t simply document the old ones. Be as creative with your style guide designs as you are with any public-facing part of your website. Purposeful are HTML and CSS style guide templates designed to help you develop creative style guides and pattern libraries for your business or clients. Save time while impressing your clients by using easily customisable HTML and CSS files that have been designed and coded to the highest standards. Twenty pages covering all common style guide components including colour, typography, buttons, form elements, and tables, plus popular pattern library components. Purposeful style guides will be available to buy online in January. ¹ Boring people ↩ 2016 Andy Clarke andyclarke 2016-12-13T00:00:00+00:00 https://24ways.org/2016/designing-imaginative-style-guides/ design
325 "Z’s not dead baby, Z’s not dead" While Mr. Moll and Mr. Budd have pipped me to the post with their predictions for 2006, I’m sure they won’t mind if I sneak in another. The use of positioning together with z-index will be one of next year’s hot techniques Both have been a little out of favour recently. For many, positioned layouts made way for the flexibility of floats. Developers I speak to often associate z-index with Dreamweaver’s layers feature. But in combination with alpha transparency support for PNG images in IE7 and full implementation of position property values, the stacking of elements with z-index is going to be big. I’m going to cover the basics of z-index and how it can be used to create designs which ‘break out of the box’. No positioning? No Z! Remember geometry? The x axis represents the horizontal, the y axis represents the vertical. The z axis, which is where we get the z-index, represents depth. Elements which are stacked using z-index are stacked from front to back and z-index is only applied to elements which have their position property set to relative or absolute. No positioning, no z-index. Z-index values can be either negative or positive and it is the element with the highest z-index value that appears closest to the viewer, regardless of its order in the source. Furthermore, if more than one element is given the same z-index, the element which comes last in source order comes out top of the pile. Let’s take three <div>s. <div id="one"></div> <div id="two"></div> <div id="three"></div> #one { position: relative; z-index: 3; } #two { position: relative; z-index: 1; } #three { position: relative; z-index: 2; } As you can see, the <div> with the z-index of 3 will appear closest, even though it comes before its siblings in the source order. As these three <div>s have no defined positioning context in the form of a positioned parent such as a <div>, their stacking order is defined from the root element <html>. Simple stuff, but these building blocks are the basis on which we can create interesting interfaces (particularly when used in combination with image replacement and transparent PNGs). Brand building Now let’s take three more basic elements, an <h1>, <blockquote> and <p>, all inside a branding <div> which acts as a new positioning context. By enclosing them inside a positioned parent, we establish a new stacking order which is independent of either the root element or other positioning contexts on the page. <div id="branding"> <h1>Worrysome.com</h1> <blockquote><p>Don' worry 'bout a thing...</p></blockquote> <p>Take the weight of the world off your shoulders.</p> </div> Applying a little positioning and z-index magic, we can set both the position of these elements inside their positioning context and their stacking order. As we are going to use background images made from transparent PNGs, each element will allow another further down the stacking order to show through. This makes for some novel effects, particularly in liquid layouts. (Ed: We’re using n below to represent whatever values you require for your specific design.)
#branding { position: relative; width: n; height: n; background-image: url(n); } #branding>h1 { position: absolute; left: n; top: n; width: n; height: n; background-image: url(h1.png); text-indent: n; } #branding>blockquote { position: absolute; left: n; top: n; width: n; height: n; background-image: url(bq.png); text-indent: n; } #branding>p { position: absolute; right: n; top: n; width: n; height: n; background-image: url(p.png); text-indent: n; } Next we can begin to see how the three elements build upon each other. 1. Elements outlined 2. Positioned elements overlaid to show context 3. Our final result Multiple stacking orders Not only can elements within a positioning context be given a z-index, but those positioning contexts themselves can also be stacked. Two positioning contexts, each with their own stacking order Interestingly, each stacking order is independent of that of either the root element or its siblings and we can exploit this to make complex layouts from just a few semantic elements. This technique was used heavily on my recent redesign of Karova.com. Dissecting part of Karova.com First the XHTML. The default template markup used for the site places <div id="nav_main"> and <div id="content"> as siblings inside their container. <div id="container"> <div id="content"> <h2></h2> <p></p> </div> <div id="nav_main"></div> </div> By giving the navigation <div> a lower z-index than the content <div> we can ensure that the positioned content elements will always appear closest to the viewer, despite the fact that the navigation comes after the content in the source. #content { position: relative; z-index: 2; } #nav_main { position: absolute; z-index: 1; } Now applying absolute positioning with a negative top value to the <h2> and a higher z-index value than the following <p> ensures that the header sits not only on top of the navigation but also the styled paragraph which follows it. h2 { position: absolute; z-index: 200; top: -n; } h2+p { position: absolute; z-index: 100; margin-top: -n; padding-top: n; } Dissecting part of Karova.com You can see the full effect in the wild on the Karova.com site. Have a great holiday season! 2005 Andy Clarke andyclarke 2005-12-16T00:00:00+00:00 https://24ways.org/2005/zs-not-dead-baby-zs-not-dead/ design
7 Get Started With GitHub Pages (Plus Bonus Jekyll) After several failed attempts at getting set up with GitHub Pages, I vowed that if I ever figured out how to do it, I’d write it up. Fortunately, I did eventually figure it out, so here is my write-up. Why I think GitHub Pages is cool Normally when you host stuff on GitHub, you’re just storing your files there. If you push site files, what you’re storing is the code, and when you view a file, you’re viewing the code rather than the output. What GitHub Pages lets you do is store those files, and if they’re HTML files, you can view them like any other website, so there’s no need to host them separately yourself. GitHub Pages accepts static HTML but can’t execute languages like PHP, or use a database in the way you’re probably used to, so you’ll need to output static HTML files. This is where templating tools such as Jekyll come in, which I’ll talk about later. The main benefit of GitHub Pages is ease of collaboration. Changes you make in the repository are automatically synced, so if your site’s hosted on GitHub, it’s as up-to-date as your GitHub repository. This really appeals to me because when I just want to quickly get something set up, I don’t want to mess around with hosting; and when people submit a pull request, I want that change to be visible as soon as I merge it without having to set up web hooks. Before you get started If you’ve used GitHub before, already have an account and know the basics like how to set up a repository and clone it to your computer, you’re good to go. If not, I recommend getting familiar with that first. The GitHub site has extensive documentation on getting started, and if you’re not a fan of using the command line, the official GitHub apps for Mac and Windows are great. I also found this tutorial about GitHub Pages by Thinkful really useful, and it contains details on how to turn an existing repository into a GitHub Pages site. Although this involves a bit of using the command line, it’s minimal, and I’ll guide you through the basics. Setting up GitHub Pages For this demo we’re going to build a Christmas recipe site — nothing complex, just a place to store recipes so we can share them with people, and they can fork or suggest changes to ones they like. My GitHub username is maban, and the project I’ve set up is called christmas-recipes, so once I’ve set up GitHub Pages, the site can be found here: http://maban.github.io/christmas-recipes/ You can set up a custom domain, but by default, the URL for your GitHub Pages site is your-username.github.io/your-project-name. Set up the repository The first thing we’re going to do is create a new GitHub repository, in exactly the same way as normal, and clone it to the computer. Let’s give it the name christmas-recipes. There’s nothing in it at the moment, but that’s OK. After setting up the repository on the GitHub website, I cloned it to my computer in my Sites folder using the GitHub app (you can clone it somewhere else, if you want), and now I have a local repository synced with the remote one on GitHub. Navigate to the repository Now let’s open up the command line and navigate to the local repository. The easiest way to do this in Terminal is by typing cd and dragging and dropping the folder into the terminal window and pressing Return. You can refer to Chris Coyier’s GIF illustrating this very same thing, from last week’s 24 ways article “Grunt for People Who Think Things Like Grunt are Weird and Hard” (which is excellent). 
So, for me, that’s… cd /Users/Anna/Sites/christmas-recipes Create a special GitHub Pages branch So far we haven’t done anything different from setting up a regular repository, but here’s where things change. Now we’re in the right place, let’s create a gh-pages branch. This tells GitHub that this is a special branch, and to treat the contents of it differently. Make sure you’re still in the christmas-recipes directory, and type this command to create the gh-pages branch: git checkout --orphan gh-pages That --orphan option might be new to you. An orphaned branch is an empty branch that’s disconnected from the branch it was created off, and it starts with no commits, making it a special standalone branch. checkout switches us from the branch we were on to that branch. If all’s gone well, we’ll get a message saying Switched to a new branch ‘gh-pages’. You may get an error message saying you don’t have admin privileges, in which case you’ll need to type sudo at the start of that command. Make gh-pages your default branch (optional) The gh-pages branch is separate to the master branch, but by default, the master branch is what will show up if we go to our repository’s URL on GitHub. To change this, go to the repository settings and select gh-pages as the default branch. If, like me, you only want the one branch, you can delete the master branch by following Oli Studholme’s tutorial. It’s actually really easy to do, and means you only have to worry about keeping one branch up to date. If you prefer to work from master but push updates to the gh-pages branch, Lea Verou has written up a quick tutorial on how to do this, and it basically involves working from the master branch, and using git rebase to bring one branch up to date with another. At the moment, we’ve only got that branch on the local machine, and it’s empty, so to be able to see something at our special GitHub Pages URL, we’ll need to create a page and push it to the remote repository. Make a page Open up your favourite text editor, create a file called index.html in your christmas-recipes folder, and put some exciting text in it. Don’t worry about the markup: all we need is text because right now we’re just checking it works. Now, let’s commit and push our changes. You can do that in the command line if you’re comfortable with that, or you can do it via the GitHub app. Don’t forget to add a useful commit message. Now we’re ready to see our gorgeous new site! Go to your-username.github.io/your-project-name and, hopefully, you’ll see your first GitHub Pages site. If not, don’t panic, it can take up to ten minutes to publish, so you could make a quick cake in a cup while you wait. After a short wait, our page should be live! Hopefully that wasn’t too traumatic. Now we know it works, we can add some proper markup and CSS and even some more pages. If you’re feeling brave, how about we take it to the next level… Setting up Jekyll Since GitHub Pages can’t execute languages like PHP, we need to give it static HTML files. This is fine if there are only a few pages, but soon we’ll start to miss things like PHP includes for content that’s the same on every page, like headers and footers. Jekyll helps set up templates and turn them into static HTML. It separates markup from content, and makes it a lot easier for people to edit pages collaboratively. With our recipe site, we want to make it really easy for people to fix typos or add notes, without having to understand PHP. Also, there’s the added benefit that static HTML pages load really fast. 
Jekyll isn’t officially supported on Windows, but it is still possible to run it if you’re prepared to get your hands dirty. Install Jekyll Back in Terminal, we’re going to install Jekyll… gem install jekyll …and wait for the script to run. This might take a few moments. It might take so long that you get worried it’s broken, but resist the urge to start mashing your keyboard like I did. Get Jekyll to run on the repository Fingers crossed nothing has gone wrong so far. If something did go wrong, don’t give up! Check this helpful post by Andy Taylor – you probably just need to install something else first. Now we’re going to tell Jekyll to set up a new project in the repository, which is in my Sites folder (yours may be in a different place). Remember, we can drag the directory into the terminal window after the command. jekyll new /Users/Anna/Sites/christmas-recipes If everything went as expected, we should get this error message: Conflict: /Users/Anna/Sites/christmas-recipes exists and is not empty. But that’s OK. It’s just upset because we’ve got that index.html file and possibly also a README.md in there that we made earlier. So move those onto your desktop for the moment and run the command again. jekyll new /Users/Anna/Sites/christmas-recipes It should say that the site has installed. Check you’re in the repository, and if you’re not, navigate to it by typing cd and dragging the christmas-recipes directory into terminal… cd /Users/Anna/Sites/christmas-recipes …and type this command to tell Jekyll to run: jekyll serve --watch By adding --watch at the end, we’re forcing Jekyll to rebuild the site every time we hit Save, so we don’t have to keep telling it to update every time we want to view the changes. We’ll need to run this every time we start work on the project, otherwise changes won’t be applied. For now, wait while it does its thing. Update the config file When it’s finished, we’ll see the text press ctrl-c to stop. Don’t do that, though. Instead, open up the directory. You’ll notice some new files and folders in there. There’s one called _site, and that’s where all the site files are saved when they’re turned into static HTML. Don’t touch the files in here — they’re the generated files and will get overwritten every time we make changes to pages and layouts. There’s a file in our directory called _config.yml. This has some settings we can change, one of them being what our base URL is. GitHub Pages will assume the base URL is above the project repository, so changing the settings here will help further down the line when setting up navigation links. Replace the contents of the _config.yml file with this: name: Christmas Recipes markdown: redcarpet pygments: true baseurl: /christmas-recipes Set up your files Overwrite the index.html file in the root with the one we made earlier (you might want to pop the README.md back in there, too). Delete the files in the css folder — we’ll add our own later. View the Jekyll site Open up your favourite browser and type http://localhost:4000/christmas-recipes in the address bar. Check it out, that’s your site! But it could do with a bit more love. Set up the _includes files It’s always useful to be able to pull in snippets of content onto pages, such as the header and footer, so they only need to be updated in one place. That’s what an _includes folder is for in Jekyll. Create a folder in the root called _includes, and within it add two files: head.html and foot.html.
In head.html, paste the following: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>{{ page.title }}</title> <link rel="stylesheet" href="{{ site.baseurl }}/css/main.css" > </head> <body> and in foot.html: </body> </html> Whenever we want to pull in something from the _includes folder, we can use {% include filename.html %} in the layout file — I’ll show you how to set that up in the next step. Making layouts In our directory, there’s a folder called _layouts and this lets us create a reusable template for pages. Inside that is a default.html file. Delete everything in default.html and paste in this instead: {% include head.html %} <h1>{{ page.title }}</h1> {{ content }} {% include foot.html %} That’s a very basic page with a header, footer, page title and some content. To apply this template to a page, go back into the index.html page and add this snippet to the very top of the file: --- layout: default title: Home --- Now save the index.html file and hit Refresh in the browser. We should see a heading where {{ page.title }} was in the layout, which matches what comes after title: on the page itself (in this case, Home). So, if we wanted a subheading to appear on every page, we could add {{ page.subheading }} to where we want it to appear in our layout file, and a line that says subheading: This is a subheading in between the dashes at the top of the page itself. Using Markdown for templates Anything on a page that sits under the closing dashes is output where {{ content }} appears in the template file. At the moment, this is being output as HTML, but we can use Markdown instead, and Jekyll will convert that into HTML. For this recipe site, we want to make it as easy as possible for people to be able to collaborate, and also have the markup separate from the content, so let’s use Markdown instead of HTML for the recipes. Telling a page to use Markdown instead of HTML is incredibly simple. All we need to do is change the filename from .html to .md, so let’s rename the index.html to index.md. Now we can use Markdown, and Jekyll will output that as HTML. Create a new layout We’re going to create a new layout called recipe which is going to be the template for any recipe page we create. Let’s keep it super simple. In the _layouts folder, create a file called recipe.html and paste in this: {% include head.html %} <main> <h1>{{ page.title }}</h1> {{ content }} <p>Recipe by <a href="{{ page.recipe-attribution-link }}">{{ page.recipe-attribution }}</a>.</p> </main> {% include nav.html %} {% include foot.html %} That’s our template. The content that goes on the recipe layout includes a page title, the recipe content, a recipe attribution and a recipe attribution link. Adding some recipe pages Create a new file in the root of the christmas-recipes folder and call it gingerbread.md. Paste the following into it: --- layout: recipe title: Gingerbread recipe-attribution: HungryJenny recipe-attribution-link: http://www.opensourcefood.com/people/HungryJenny/recipes/soft-christmas-gingerbread-cookies --- Makes about 15 small cookies. ## Ingredients * 175g plain flour * 90g brown sugar * 50g unsalted butter, diced, at room temperature * 2 tbsp golden syrup * 1 egg, beaten * 1 tsp ground ginger * 1 tsp cinnamon * 1 tsp bicarbonate of soda * Icing sugar to dust ## Method 1. Sift the flour, ginger, cinnamon and bicarbonate of soda into a large mixing bowl. 2. Use your fingers to rub in the diced butter. Mix in the sugar. 3. Mix the egg with the syrup then pour into the flour mixture.
Fold in well to form a dough. 4. Tip some flour onto the work surface and knead the dough until smooth. 5. Preheat the oven to 180°C. 6. Roll the dough out flat and use a shaped cutter to make as many cookies as you like. 7. Transfer the cookies to a tray and bake in the oven for 15 minutes. Lightly dust the cookies with icing sugar. The content is in Markdown, and when we hit Save, it’ll be converted into HTML in the _site folder. Save the file, and go to http://localhost:4000/christmas-recipes/gingerbread.html in your favourite browser. As you can see, the Markdown content has been converted into HTML, and the attribution text and link have been inserted in the right place. Add some navigation In your _includes folder, create a new file called nav.html. Here is some code that will generate your navigation: <nav class="nav-primary" role="navigation" > <ul> {% for p in site.pages %} <li> <a {% if p.url == page.url %}class="active"{% endif %} href="{{ site.baseurl }}{{ p.url }}">{{ p.title }}</a> </li> {% endfor %} </ul> </nav> This is going to look for all pages and generate a list of them, and give the navigation item that is currently active a class of active so we can style it differently. Now we need to include that navigation snippet in our layout. Paste {% include nav.html %} in the default.html file, under {{ content }}. Push the changes to GitHub Pages Now we’ve got a couple of pages, it’s time to push our changes to GitHub. We can do this in exactly the same way as before. Check your special GitHub URL (your-username.github.io/your-project-name) and you should see your site up and running. If you quit Terminal, don’t forget to run jekyll serve --watch from within the directory the next time you want to work on the files. Next steps Now we know the basics of creating Jekyll templates and publishing them as GitHub Pages, we can have some fun adding more pages and styling them up. Here’s one I made earlier. I’ve only been using Jekyll for a matter of weeks, mainly for prototyping. It’s really good as a content management system for blogs, and a lot of people host their Jekyll blogs on GitHub, such as Harry Roberts: “By hosting the code so openly it will make me take more pride in it and allow me to work on it much more easily; no excuses now!” Overall, the documentation for Jekyll feels a little sparse and geared more towards blogs than other sites, but as more people discover the benefits of it, I’m sure this will improve over time. If you’re interested in poking about with some code, all the files from this tutorial are available on GitHub. 2013 Anna Debenham annadebenham 2013-12-18T00:00:00+00:00 https://24ways.org/2013/get-started-with-github-pages/
96 Unwrapping the Wii U Browser The Wii U was released on 18 November 2012 in the US, and 30 November in the UK. It’s the first eighth-generation home console, the first mainstream second-screen device, and it has some really impressive browser specs. Consoles are not just for games now: they’re marketed as complete entertainment solutions. Internet connectivity and browser functionality have gone from a nice-to-have feature in game consoles to a selling point. In Nintendo’s case, they see it as a challenge to design an experience that’s better than browsing on a desktop. Let’s make a browser that users can use on a daily basis, something that can really handle everything we’ve come to expect from a browser and do it more naturally. Sasaki – Iwata Asks on Nintendo.com With 11% of people using console browsers to visit websites, it’s important to consider these devices right from the start of projects. Browsing the web on a TV or handheld console is a very different experience to browsing on a desktop or a mobile phone, and has many usability implications. Console browser testing When I’m testing a console browser, one of the first things I do is run Niels Leenheer’s HTML5 test and Lea Verou’s CSS3 test. I use these benchmarks as a rough comparison of the standards each browser supports. In October, IE9 came out for the Xbox 360, scoring 120/500 in the HTML5 test and 32% in the CSS3 test. The PS Vita also had an update to its browser in recent weeks, jumping from 58/500 to 243/500 in the HTML5 test, and 32% to 55% in the CSS3 test. Manufacturers have been stepping up their game, trying to make their browsing experiences better. To give you an idea of how the Wii U currently compares to other devices, here are the test results of the other TV consoles I’ve tested. I’ve written more in-depth notes on TV and portable console browsers separately. Wii U (released 2012): HTML5 score 258/500; CSS3 score 48%. Runs a Netfront browser (WebKit). Wii (2006): HTML5 score 89/500; the CSS3 test wouldn’t run. Runs an Opera browser. PS3 (2006): HTML5 score 68/500; CSS3 score 38%. Runs a Netfront browser (WebKit). Xbox 360 (2005): HTML5 score 120/500; CSS3 score 32%. A browser for the Xbox (IE9) was only recently released in October 2012. The Kinect provides voice and gesture support. There’s also SmartGlass, a second-screen app for platforms including Android and iOS. The Wii U browser is Nintendo’s fifth attempt at a console browser. Based on these tests, it’s already looking promising. Why console browsers used to suck It takes a lot of system memory to run a good browser, and the problem of older consoles is that they don’t have much memory available. The original Nintendo DS needs a memory expansion pack just to run the browser, because the 4MB it has on board isn’t enough. I noticed that even on newer devices, some sites fail to load because the system runs out of memory. The Wii came out six years ago with an Opera browser. Still being used today and with such low resources available, the latest browser features can’t reasonably be supported. There’s also pressure to add features such as tabs, and enable gamers to use the browser while a game is paused. Nintendo’s browser team have the advantage of higher specs to play with on their new console (1GB of memory dedicated to games, 1GB for the system), which makes it easier to support the latest standards. But it’s still a challenge to fit everything in. …even though we have more memory, the amount of memory we can use for the browser is limited compared to a PC, so we’ve worked in ways that efficiently allocates the available memory per tab.
To work on this, the experience working on the browser for the Nintendo 3DS system under a limited memory constraint helped us greatly. Sasaki – Iwata Asks on Nintendo.com In the box The Wii U consists of a console unit which plugs into a TV (the first to support HD), and a wireless controller known as a gamepad. The gamepad is a lot bigger than typical TV console controllers, and it has a touchscreen on the front. The touchscreen is resistive, responding to pressure rather than electrical current. It’s intended to be used with a stylus (provided) but fingers can be used. It might look a bit like one, but the gamepad isn’t a portable console designed to be taken out like the PS Vita. The gamepad can be used as a standalone screen with the TV switched off, as long as it’s within range of the console unit – it basically piggybacks off it. It’s surprisingly lightweight for its size. It has a wealth of detectors including 9-axis control. Sensors wake the device from sleep when it’s picked up. There’s also a camera on the front, and a headphone port and speakers, with audio coming through both the TV and the gamepad giving a surround sound feel. Up to six tabs can be opened at once, and the browser can be used while games are paused. There’s a really nice little feature here – the current game’s name is saved as a search option, so it’s really quick to look up contextual content such as walk-throughs. Controls Only one gamepad can be used to control the browser, but if there are Wiimotes connected, they can be used as pointers. This doesn’t let the user do anything except point (they each get a little hand icon with a number on it displayed on the screen), but it’s interesting that multiple people can be interacting with a site at once. See a bigger version The gamepad can also be used as a simple TV remote control, with basic functions such as bringing up the programme guide, adjusting volume and changing channel. I found the simplified interface much more usable than a full-featured remote control. I’m used to scrolling being sluggish on consoles, but the Wii U feels almost as snappy as a desktop browser. Sites load considerably faster compared with others I’ve tested. Tilt-scroll Holding down ZL and ZR while tilting the screen activates an Instapaper-style tilt to scroll for going up and down the page quickly, useful for navigating very long pages. Second screen The TV mirrors most of what’s on the gamepad, although the TV screen just displays the contents of the browser window, while the gamepad displays the site along with the browser toolbar. When the user with the gamepad is typing, the keyboard is hidden from the TV screen – there’s just a bit of text at the top indicating what’s happening on the gamepad. Pressing X draws an on-screen curtain over the TV, hiding the content that’s on the gamepad from the TV. Pressing X again opens the curtains, revealing what’s on the gamepad. Holding the button down plays a drumroll before it’s released and the curtains are opened. I can imagine this being used in meetings as a fun presentation tool. In a sense, browsing is a personal activity, but you get the idea that people will be coming and going through the room. When I first saw the curtain function, it made a huge impression on me. 
I walked around with it all over the company saying, “They’ve really come up with something amazing!” Iwata – Iwata Asks on Nintendo.com Text Writing text Unlike the capacitive screens on smartphones, the Wii U’s resistive screen needs to be pressed harder than you’re probably used to for registering a touch event. The gamepad screen is big, which makes it much easier to type on this device than other handheld consoles, even without the stylus. It’s still more fiddly than a full-sized keyboard though. When you’re designing forms, consider the extra difficulty console users experience. Although TV screens are physically big, they are typically viewed from further away than desktop screens. This makes readability an issue, so Nintendo have provided not one, but four ways to zoom in and out: Double-tapping on the screen. Tapping the on-screen zoom icons in the browser toolbar. Pressing the + and - buttons on the device. Moving the right analogue stick up and down. As well as making it easy to zoom in and out, Nintendo have done a few other things to improve the reading experience on the TV. System font One thing you’ll notice pretty quickly is that the browser lacks all the fonts we’re used to falling back to. Serif fonts are replaced with the system’s sans-serif font. I couldn’t get Typekit’s font loading method to work but Fontdeck, which works slightly differently, does display custom fonts. The system font has been optimised for reading at a distance and is easy to distinguish because the lowercase e has a quirky little tilt. Don’t lose :focus Using the D-pad to navigate is similar to using a keyboard. Individual links are focused on, with a blue outline drawn around them. The recently redesigned An Event Apart site is an example that improves the experience for keyboard and D-pad users. They’ve added a yellow background colour to links on focus. It feels nicer than the default blue outline on its own. Media This year, television overtook PCs as the primary way to watch online video content. TV is the natural environment for video, and 42% of online TVs in the US are connected to the internet via a console. Unfortunately, the <video> tag isn’t supported in most console browsers, and those that have Flash installed often have such an old version that the video won’t play. I suspect this has been a big driver in getting console browsers to support web standards. The Wii U is designed with video content in mind. It doesn’t support Flash but it does support the HTML5 <video> tag. Some video formats can’t be played, but those that are supported bring up an optimised interface with a custom scrub bar. This is where the device switches from mirroring the TV to being a second screen. The full-screen video is displayed on the TV, and the interface on the gamepad. The really clever bit is that while a video is playing, the gamepad user can keep the video playing on the TV screen while searching for another video or browsing the web. This is the same for images too. On the left, the video is being shown full-screen on the TV and gamepad. Only the gamepad gets the scrub bar. Clicking the slide up/down button (circled) lets the gamepad user browse the web while the video on the TV continues to play full-screen, as shown in the image on the right. There’s support for SVG images, and they look great on a high-definition TV screen. However, there’s currently no way to save or download files. Preparing for console users I wasn’t expecting to be quite as impressed as I am by this browser. 
It’s encouraging to see console makers investing time into improving the experience as well as the standards support. In the same way there was an explosion in mobile browser use as the experience got better, I’m sure we’ll see the same with console browsers as the experience improves. The value of detection Consoles offer a range of inputs including gesture, voice and controller buttons. That means we have to think about more diverse methods of input than just touch and click. This is where I could tell you to add some detection methods such as user agent string sniffing to target a different experience for console users. But the majority of the time, that really isn’t necessary. TV console browsers are getting a lot better at compensating for the living room environment, and they’re designed to work with websites that haven’t been optimised for this context. Rather than tighten our grip on optimising experiences for every device out there, we’ve got to be pragmatic. There are so many new devices coming out every week that our designs need to be future-proof rather than fixed to a particular device in time. Even fuzzy device detection isn’t reliable – the PS Vita declares itself to be mobile, a mobile device and a Kindle Fire tablet, while the two DS devices state they’re neither mobile nor mobile phones nor tablets, but computers. They’re weird outliers, but they’re still important devices to consider. Thinking broadly about how our designs will be interacted with and viewed on a TV screen can help improve that experience for everyone. This is about accessibility. Considering how someone uses a site with a D-pad, we can improve the experience for keyboard users. When we think about colour contrast and text legibility on TV screens, we can apply that for anyone who reads content on the web. So why just offer this to the TV users? Playing with the browser …we want to prove to them through this Wii U Internet Browser that browsing itself can be an entertainment. Iwata – Iwata Asks on Nintendo.com Although I’m cautious about designing experiences for specific devices, it’s fun to experiment with the technology available. Nintendo have promised web developers access to the Wii U’s buttons and sensors. There’s already some JavaScript documentation, and a demo for you to try. If you’re interested in making your own games, thanks to web standards, a lot of HTML5 games already work on the device. Matt Hackett wrote up his experience of testing the game he built, and he talks about some of the features the browser lacks. There’s certainly an incentive there for console manufacturers to improve their HTML5 support so more games can be played in their browser. What excites me about consoles is that it’s like looking at what might be available to us in future browsers. As well as thinking about how our sites work on small screens, we should also consider big screens. We’re already figuring out how images should work at different screen widths and connection speeds, but we’ve also got some interesting challenges ahead of us catering for second screen experiences and 3D-enabled devices. So, this Christmas, if you’re huddled round the game console or a smart TV, give the browser in it a try. 2012 Anna Debenham annadebenham 2012-12-22T00:00:00+00:00 https://24ways.org/2012/unwrapping-the-wii-u-browser/ ux
215 Teach the CLI to Talk Back The CLI is a daunting tool. It’s quick and powerful, but it’s also incredibly easy to screw things up in – either with a mistyped command, or a correctly typed command used at the wrong moment. This puts a lot of people off using it, but it doesn’t have to be this way. If you’ve ever interacted with Slack’s Slackbot to set a reminder or ask a question, you’re basically using a command line interface, but it feels more like having a conversation. (My favourite Slack app is Lunch Train which helps with the thankless task of herding colleagues to a particular lunch venue on time.) The same goes for voice-operated assistants like Alexa, Siri and Google Home. There are even games, like Lifeline, where you interact with a stranded astronaut via pseudo SMS, and KOMRAD where you chat with a Soviet AI. I’m not aiming to build an AI here – my aspirations are a little more down to earth. What I’d like is to make the CLI a friendlier, more forgiving, and more intuitive tool for new or reluctant users. I want to teach it to talk back. Interactive command lines in the wild If you’ve used dev tools in the command line, you’ve probably already used an interactive prompt – something that asks you questions and responds based on your answers. Here are some examples: Yeoman If you have Yeoman globally installed, running yo will start a command prompt. The prompt asks you what you’d like to do, and gives you options for how to proceed. Seasoned users will run specific commands for these options rather than go through this prompt, but it’s a nice way to start someone off with using the tool. npm If you’re a Node.js developer, you’re probably familiar with typing npm init to initialise a project. This brings up prompts that will populate a package.json manifest file for that project. The alternative would be to expect the user to craft their own package.json, which is more error-prone since it’s in JSON format, so something as trivial as an extraneous comma can throw an error. Snyk Snyk is a dev tool that checks for known vulnerabilities in your dependencies. Running snyk wizard in the CLI brings up a list of all the known vulnerabilities, and gives you options on how to deal with each one – such as patching the issue, applying a fix by upgrading the problematic dependency, or ignoring the issue (you are then prompted for a reason). These decisions get mapped to the manifest and a .snyk file, and committed into the repo so that the settings are the same for everyone who uses that project. I work at Snyk, and running the wizard is what made me think about building my own personal assistant in the command line to help me with some boring, repetitive tasks. Writing your own Something I do a lot is add bookmarks to styleguides.io – I pull down the entire repo, copy and paste a template YAML file, and edit the contents. Sometimes I get it wrong and break the site. So I’ve been putting together a tool to help me add bookmarks. It’s called bookmarkbot – it’s a personal assistant squirrel called Mark who will collect and bury your bookmarks for safekeeping.* *Fortunately, this metaphor also gives me a charming excuse for any situation where bookmarks sometimes get lost – it’s not my poorly-written code, honest, it’s just being realistic because sometimes squirrels forget where they buried things! When you run bookmarkbot, it will ask you for some information, and save that information as a Markdown file in YAML format.
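Before we reach for any packages, it’s worth seeing what these libraries are abstracting away. Here’s a minimal sketch of a talking-back prompt using nothing but Node.js’s built-in readline module – the squirrel’s question here is my own invention for illustration, not part of bookmarkbot:
// prompt.js – a bare-bones interactive prompt with no dependencies
var readline = require('readline');

// Wire the prompt up to the terminal's input and output
var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

// Ask a single question and echo the answer back
rl.question('🐿 What would you like me to bookmark? ', function (answer) {
  console.log('🐿 Got it! I\'ll keep "' + answer + '" safe.');
  rl.close();
});
Everything a library like inquirer adds – question types, lists, validation – is built on top of this simple ask-and-answer loop.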
For this demo, I’m going to use a Node.js package called inquirer, which is a well-supported tool for creating command line prompts. I like it because it has a bunch of different question types: input, which asks for some text back; confirm, which expects a yes/no response; and list, which gives you a set of options to choose from. You can even nest questions, Choose Your Own Adventure style. Prerequisites Node.js npm RubyGems (Only if you want to go as far as serving a static site for your bookmarks, and you want to use Jekyll for it) Disclaimer Bear in mind that this is a really simplified walkthrough. It doesn’t have any error states, and it doesn’t handle the situation where we save a file with the same name. But it gets you in a good place to start building out your tool. Let’s go! Create a new folder wherever you keep your projects, and give it an awesome name (I’ve called mine bookmarks and put it in the Sites directory because I’m unimaginative). Now cd to that directory. cd Sites/bookmarks Let’s use that example I gave earlier, the trusty npm init. npm init Pop in the information you’d like to provide, or hit ENTER to skip through and save the defaults. Your directory should now have a package.json file in it. Now let’s install some of the dependencies we’ll need.
npm install --save inquirer
npm install --save slugify
Next, add the following snippet to your package.json to tell it to run this file when you run npm start.
"scripts": {
  …
  "start": "node index.js"
}
That index.js file doesn’t exist yet, so let’s create it in the root of our folder, and add the following:
// Packages we need
var fs = require('fs'); // Creates our file (part of Node.js so doesn't need installing)
var inquirer = require('inquirer'); // The engine for our questions prompt
var slugify = require('slugify'); // Will turn a string into a usable filename

// The questions
var questions = [
  {
    type: 'input',
    name: 'name',
    message: 'What is your name?',
  },
];

// The questions prompt
function askQuestions() {
  // Ask questions
  inquirer.prompt(questions).then(answers => {
    // Things we'll need to generate the output
    var name = answers.name;

    // Finished asking questions, show the output
    console.log('Hello ' + name + '!');
  });
}

// Kick off the questions prompt
askQuestions();
This is just some barebones where we’re including the inquirer package we installed earlier. I’ve stored the questions in a variable, and the askQuestions function will prompt the user for their name, and then print “Hello <your name>” in the console. Enough setup, let’s see some magic. Save the file, go back to the command line and run npm start. Extending what we’ve learnt At the moment, we’re just printing a name to the console, which isn’t really achieving our goal of saving bookmarks. We don’t want our tool to forget our information every time we talk to it – we need to save it somewhere. So I’m going to add a little function to write the output to a file. Saving to a file Create a folder in your project’s directory called _bookmarks. This is where the bookmarks will be saved. I’ve replaced my questions array, and instead of asking for a name, I’ve extended out the questions, asking to be provided with a link and title (as a regular input type), a list of tags (using inquirer’s checkbox type), and finally a description, again, using the input type.
So this is how my code looks now:
// Packages we need
var fs = require('fs'); // Creates our file
var inquirer = require('inquirer'); // The engine for our questions prompt
var slugify = require('slugify'); // Will turn a string into a usable filename

// The questions
var questions = [
  {
    type: 'input',
    name: 'link',
    message: 'What is the url?',
  },
  {
    type: 'input',
    name: 'title',
    message: 'What is the title?',
  },
  {
    type: 'checkbox',
    name: 'tags',
    message: 'Would you like me to add any tags?',
    choices: [
      { name: 'frontend' },
      { name: 'backend' },
      { name: 'security' },
      { name: 'design' },
      { name: 'process' },
      { name: 'business' },
    ],
  },
  {
    type: 'input',
    name: 'description',
    message: 'How about a description?',
  },
];

// The questions prompt
function askQuestions() {
  // Say hello
  console.log('🐿 Oh, hello! Found something you want me to bookmark?\n');

  // Ask questions
  inquirer.prompt(questions).then((answers) => {
    // Things we'll need to generate the output
    var title = answers.title;
    var link = answers.link;
    var tags = answers.tags + '';
    var description = answers.description;
    var output = '---\n' +
      'title: "' + title + '"\n' +
      'link: "' + link + '"\n' +
      'tags: [' + tags + ']\n' +
      '---\n' +
      description + '\n';

    // Finished asking questions, show the output
    console.log('\n🐿 All done! Here is what I\'ve written down:\n');
    console.log(output);

    // Things we'll need to generate the filename
    var slug = slugify(title);
    var filename = '_bookmarks/' + slug + '.md';

    // Write the file
    fs.writeFile(filename, output, function () {
      console.log('\n🐿 Great! I have saved your bookmark to ' + filename);
    });
  });
}

// Kick off the questions prompt
askQuestions();
The output is formatted into YAML metadata as a Markdown file, which will allow us to turn it into a static HTML file using a build tool later. Run npm start again and have a look at the file it outputs. Getting confirmation Before the user makes critical changes, it’s good to verify those changes first. We’re going to add a confirmation step to our tool, before writing the file. More seasoned CLI users may favour speed over a “hey, can you wait a sec and just check this is all ok” step, but I always think it’s worth adding one so you can occasionally save someone’s butt. So, underneath our questions array, let’s add a confirmation array.
// Packages we need
…

// The questions
…

// Confirmation questions
var confirm = [
  {
    type: 'confirm',
    name: 'confirm',
    message: 'Does this look good?',
  },
];

// The questions prompt
…
As we’re adding the confirm step before the file gets written, we’ll need to add the following inside the askQuestions function:
// The questions prompt
function askQuestions() {
  // Say hello
  …

  // Ask questions
  inquirer.prompt(questions).then((answers) => {
    …
    // Things we'll need to generate the output
    …

    // Finished asking questions, show the output
    …

    // Confirm output is correct
    inquirer.prompt(confirm).then(answers => {
      // Things we'll need to generate the filename
      var slug = slugify(title);
      var filename = '_bookmarks/' + slug + '.md';

      if (answers.confirm) {
        // Save output into file
        fs.writeFile(filename, output, function () {
          console.log('\n🐿 Great! I have saved your bookmark to ' + filename);
        });
      } else {
        // Ask the questions again
        console.log('\n🐿 Oops, let\'s try again!\n');
        askQuestions();
      }
    });
  });
}

// Kick off the questions prompt
askQuestions();
Now run npm start and give it a go! Typing y will write the file, and n will take you back to the start.
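As the disclaimer said, this walkthrough has no error states. If you want to add some, inquirer questions accept a validate function, which returns true to accept the answer or a message to show before re-asking. Here’s a sketch of how the url question could guard against empty or obviously malformed input – the checks and squirrel-flavoured messages are my own additions, not part of the original demo:
{
  type: 'input',
  name: 'link',
  message: 'What is the url?',
  validate: function (input) {
    // Keep re-asking until we get something that looks like a url
    if (input.trim() === '') {
      return '🐿 I can\'t bury an empty bookmark!';
    }
    if (!/^https?:\/\//.test(input)) {
      return '🐿 That doesn\'t look like a url – it should start with http:// or https://';
    }
    return true;
  },
},
Swap this in for the existing link question in the questions array, and inquirer will politely refuse to continue until it gets something usable.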
Ideally, I’d store the answers already given as defaults so the user doesn’t have to start from scratch, but I want to keep this demo simple. Serving the files Now that your bookmarking tool is successfully saving formatted Markdown files to a folder, the next step is to serve those files in a way that lets you share them online. The easiest way to do this is to use a static-site generator to convert your YAML files into HTML, and pop them all on one page. Now, you’ve got a few options here and I don’t want to force you down any particular path, as there are plenty out there – it’s just a case of using the one you’re most comfortable with. I personally favour Jekyll because of its tight integration with GitHub Pages – I don’t want to mess around with hosting and deployment, so it’s really handy to have my bookmarks publish themselves on my site as soon as I commit and push them using Git. I’ll give you a very brief run-through of how I’m doing this with bookmarkbot, but I recommend you read my Get Started With GitHub Pages (Plus Bonus Jekyll) guide if you’re unfamiliar with them, because I’ll be glossing over some bits that are already covered in there. Setting up a build tool If you haven’t already, install Jekyll and Bundler globally through RubyGems. Jekyll is our static-site generator, and Bundler is what we use to install Ruby dependencies. gem install jekyll bundler In my project folder, I’m going to run the following which will install the Jekyll files we’ll need to build our listing page. I’m using --force, otherwise it will complain that the directory isn’t empty. jekyll new . --force If you check your project folder, you’ll see a bunch of new files. Now run the following to start the server: bundle exec jekyll serve This will build a new directory called _site. This is where your static HTML files have been generated. Don’t touch anything in this folder because it will get overwritten the next time you build. Now that serve is running, go to http://127.0.0.1:4000/ and you’ll see the default Jekyll page and know that things are set up right. Now, instead, we want to see our list of bookmarks that are saved in the _bookmarks directory (make sure you’ve got a few saved). So let’s get that set up next. Open up the _config.yml file that Jekyll added earlier. In here, we’re going to tell it about our bookmarks. Replace everything in your _config.yml file with the following:
title: My Bookmarks
description: These are some of my favourite articles about the web.
markdown: kramdown
baseurl: /bookmarks # This needs to be the same name as whatever you call your repo on GitHub.
collections:
  - bookmarks
This will make Jekyll aware of our _bookmarks folder so that we can call it later. Next, create a new directory and file at _layouts/home.html and paste in the following.
<!doctype html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>{{site.title}}</title>
  <meta name="description" content="{{site.description}}">
</head>
<body>
  <h1>{{site.title}}</h1>
  <p>{{site.description}}</p>
  <ul>
    {% for bookmark in site.bookmarks %}
    <li>
      <a href="{{bookmark.link}}">
        <h2>{{bookmark.title}}</h2>
      </a>
      {{bookmark.content}}
      {% if bookmark.tags %}
      <ul>
        {% for tags in bookmark.tags %}<li>{{tags}}</li>{% endfor %}
      </ul>
      {% endif %}
    </li>
    {% endfor %}
  </ul>
</body>
</html>
Restart Jekyll for your config changes to kick in, and go to the url it provides you (probably http://127.0.0.1:4000/bookmarks, unless you gave something different as your baseurl).
It’s a decent start – there’s a lot more we can do in this area but now we’ve got a nice list of all our bookmarks, let’s get it online! If you want to use GitHub Pages to host your files, your first step is to push your project to GitHub. Go to your repository and click “settings”. Scroll down to the section labelled “GitHub Pages”, and from here you can enable it. Select your master branch, and it will provide you with a url to view your published pages. What next? Now that you’ve got a framework in place for publishing bookmarks, you can really go to town on your listing page and make it your own. First thing you’ll probably want to do is add some CSS, then when you’ve added a bunch of bookmarks, you’ll probably want to have some filtering in place for the tags, perhaps extend the types of questions that you ask to include an image (if you’re feeling extra-fancy, you could just ask for a url and pull in metadata from the site itself). Maybe you’ve got an idea that doesn’t involve bookmarks at all. You could use what you’ve learnt to build a place where you can share quotes, a list of your favourite restaurants, or even Christmas gift ideas. Here’s one I made earlier My demo, bookmarkbot, is on GitHub, and I’ve reused a lot of the code from styleguides.io. Feel free to grab bits of code from there, and do share what you end up making! 2017 Anna Debenham annadebenham 2017-12-11T00:00:00+00:00 https://24ways.org/2017/teach-the-cli-to-talk-back/ code
244 It’s Beginning to Look a Lot Like XSSmas I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional. It’s the same with gifting code. Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it. With platforms like GitHub and npm, giving the gift of code is so easy it’s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it’s also risky. Vulnerabilities and Open Source Open source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps. Graph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017 In the first 24 ways article this year, Katie referenced a few different types of vulnerability: Cross-site Request Forgery (also known as CSRF) SQL Injection Cross-site Scripting (also known as XSS) There are many more types of vulnerability, and those that live under the same category share similarities. For example, my favourite – is it weird to have a favourite vulnerability? – is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks – share it generously with your colleagues. Most vulnerabilities like this are not added intentionally – they’re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library. Others, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer’s curiosity. Another tactic to get developers to unwittingly install malicious packages into their codebase is “typosquatting” – back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env). This is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas. 
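To get a feel for just how close these imposter names can be, here’s a rough sketch that flags any dependency in your package.json sitting within one edit of a well-known package name. The popular list is a tiny illustrative sample rather than a real registry lookup, and this is a toy check, not a substitute for a proper security tool:
// typosquat-check.js – flag dependencies one edit away from a popular name
var popular = ['cross-env', 'lodash', 'express', 'react', 'moment'];

// Classic Levenshtein edit distance between two strings
function editDistance(a, b) {
  var d = [];
  for (var i = 0; i <= a.length; i++) d[i] = [i];
  for (var j = 0; j <= b.length; j++) d[0][j] = j;
  for (i = 1; i <= a.length; i++) {
    for (j = 1; j <= b.length; j++) {
      var cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost);
    }
  }
  return d[a.length][b.length];
}

// Compare each dependency name against the sample of popular packages
var deps = Object.keys(require('./package.json').dependencies || {});
deps.forEach(function (dep) {
  popular.forEach(function (name) {
    var distance = editDistance(dep, name);
    if (distance > 0 && distance <= 1) {
      console.log('Suspicious: ' + dep + ' is one typo away from ' + name);
    }
  });
});
Run against a project that depends on crossenv, this would flag it as one typo away from cross-env – exactly the trick in the npm report above.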
Santa’s on his sleigh If you use open source libraries, chances are that these libraries also use open source libraries. Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn’t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too. Let’s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that’s just the tip of the iceberg – have a look at the full dependency tree which contains 109 dependencies – more dependencies than there are Christmas puns in this article. Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies. Fixing vulnerabilities – the ultimate Christmas gift If you’re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy. When you find out about a new vulnerability, you have some options: Fix the vulnerability via an upgrade You can often fix a vulnerability by upgrading the library to the latest version. Make sure you’re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version. Patch the vulnerable code Sometimes, a fix for a vulnerable library isn’t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won’t change anything else. Switch to a different library If the library you’re using has no fix or patch, you may be better off switching it out for another one, particularly if it looks like it’s no longer being maintained. Responsibly disclosing vulnerabilities Knowing how to responsibly disclose vulnerabilities is something I’m ashamed to admit that I didn’t know about before I joined a security company. But it’s so important! On discovering a new vulnerability, a developer has a few options: A malicious developer will exploit that vulnerability for their own gain. A reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited. An ethical and aware developer will follow what’s known as a “responsible disclosure process”. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too. It’s important to understand this process if you’re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code. So what does responsible disclosure look like?
I’ll take Node.js’s security disclosure policy as an example. They ask that all security issues that are found in Node.js be reported there. (There’s a separate process for bugs found in third-party npm packages). Once you’ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they’ll give you regular updates about the progress they’re making towards fixing it. As part of this, they’ll figure out which versions are affected, and prepare fixes for them. They’ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org. Tim Kadlec published an in-depth article about responsible disclosures if you’re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed. Encourage responsible disclosure Add a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability – 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn’t published one. Stats from the State of Open Source Security 2017 Bug bounties Some companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, it also reduces the enticement of exploiting a vulnerability over reporting it and getting a quick cash reward. Hackerone is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. Wordpress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program. If you don’t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets for a report about a critical vulnerability in exchange for money. On one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question. A gift worth giving It’s time to brush the dust off those old gifts that we shared and forgot about. Practice good hygiene and run them through your favourite security tool – I’m just a little biased towards Snyk, but as Katie mentioned, there’s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities. Stats from the State of Open Source Security 2017 Most importantly, patch or upgrade those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too. 2018 Anna Debenham annadebenham 2018-12-17T00:00:00+00:00 https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/ code
282 Front-end Style Guides We all know that feeling: some time after we launch a site, new designers and developers come in and make adjustments. They add styles that don’t fit with the content, use typefaces that make us cringe, or chuck in bloated code. But if we didn’t leave behind any documentation, we can’t really blame them for messing up our hard work. To counter this problem, graphic designers are often commissioned to produce style guides as part of a rebranding project. A style guide provides details such as how much white space should surround a logo, which typefaces and colours a brand uses, along with when and where it is appropriate to use them. Design guidelines Some design guidelines focus on visual branding and identity. The UK National Health Service (NHS) refer to theirs as “brand guidelines”. They help any designer create something such as a trustworthy leaflet for an NHS doctor’s surgery. Similarly, Transport for London’s “design standards” ensure the correct logos and typefaces are used in communications, and that they comply with the Disability Discrimination Act. Some guidelines go further, encompassing a whole experience, from the visual branding to the messaging, and the icon sets used. The BBC calls its guidelines a “Global Experience Language” or GEL. It’s essential for maintaining coherence across multiple sites under the same BBC brand. The BBC’s Global Experience Language. Design guidelines may be brief and loose to promote creativity, like Mozilla’s “brand toolkit”, or be precise and run to many pages to encourage greater conformity, such as Apple’s “Human Interface Guidelines”. Whatever name or form they’re given, documenting reusable styles is invaluable when maintaining a brand identity over time, particularly when more than one person (who may not be a designer) is producing material. Code standards documents We can make a similar argument for code. For example, in open source projects, where hundreds of developers are submitting code, it makes sense to set some standards. Drupal and Wordpress have written standards that make editing code less confusing for users, and more maintainable for contributors. Each community has nuances: Drupal requests that developers indent with two spaces, while Wordpress stipulates a tab. Whatever the rules, good code standards documents also explain why they make their recommendations. The front-end developer’s style guide Design style guides and code standards documents have been a successful way of ensuring brand and code consistency, but in between the code and the design examples, web-based style guides are emerging. These are maintained by front-end developers, and are more dynamic than visual design guidelines, documenting every component and its code on the site in one place. Here are a few examples I’ve seen in the wild: Natalie Downe’s pattern portfolio Natalie created the pattern portfolio system while working at Clearleft. The phrase describes a single HTML page containing all the site’s components and styles that can act as a deliverable. Pattern portfolio by Natalie Downe for St Paul’s School, kept up to date when new components are added. The entire page is about four times the length shown. Each different item within a pattern portfolio is a building block or module. The components are decoupled from the layout, and linearized so they can slot into anywhere on a page. The pattern portfolio expresses every component and layout structure in the smallest number of documents. 
It sets out how the markup and CSS should be, and is used to illustrate the project’s shared vocabulary. Natalie Downe By developing a system, rather than individual pages, the result is flexible when the client wants to add more pages later on. Paul Lloyd’s style guide Paul Lloyd has written an extremely comprehensive style guide for his site. Not only does it feature every plausible element, but it also explains in detail when it’s appropriate to use each one. Paul’s style guide is also great educational material for people learning to write code. Oli Studholme’s style guide Even though Oli’s style guide is specific to his site, he’s written it as though it’s for someone else. It’s exhaustive and gives justifications for all his decisions. In some places, he links to browser bug tickets and makes recommendations for cross-browser compatibility. Oli has released his style guide under a Creative Commons Attribution Share-alike license, and encourages others to create their own versions. Jeremy Keith’s pattern primer Front-end style guides may have comments written in the code, annotations that appear on the page, or they may list components alongside their code, like Jeremy’s pattern primer. You can watch or fork Jeremy’s pattern primer on Github. Linearizing components like this resembles a kind of mobile first approach to development, which Jeremy talks about on the 5by5 podcast: The Web Ahead 3. The benefits of maintaining a front-end style guide If you still need convincing that producing documentation like this for every project is worth the effort, here are a few nice side-effects to working this way: Easier to test A unified style guide makes it easier to spot where your design breaks. It’s simple to check how components adapt to different screen widths, test for browser bugs and develop print style sheets when everything is on the same page. When I worked with Natalie, she’d resize the browser window and bump the text size up and down during development to see if anything would break. Better workflow The approach also forces you to think how something works in relation to the whole site, rather than just a specific page, making it easier to add more pages later on. Starting development by creating a style guide makes a lot more sense than developing on a page-by-page basis. Shared vocabulary Natalie’s pattern portfolio in particular creates a shared vocabulary of names for components (teaser, global navigation, carousel…), so a team can refer to different regions of the site and have a shared understanding of its meaning. Useful reference A combined style guide also helps designers and writers to see the elements that will be incorporated in the site and, therefore, which need to be designed or populated. A boilerplate list of components for every project can act as a reminder of things that may get missed in the design, such as button states or error messages. Creating your front-end style guide As you’ve seen, there are plenty of variations on the web style guide. Which method is best depends on your project and workflow. Let’s say you want to show your content team how blockquotes and asides look, when it’s appropriate to use them, and how to create them within the CMS. In this case, a combination of Jeremy’s pattern primer and Paul’s descriptive style guide – with the styled component alongside a code snippet and a description of when to use it – may be ideal. 
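If the pattern primer idea appeals, generating one doesn’t take much code. Here’s a minimal Node.js sketch along those lines – it assumes a patterns/ folder of HTML snippets (a made-up layout for illustration, not Jeremy’s actual implementation) and builds a single page with each pattern rendered above its escaped source:
// build-primer.js – render each pattern snippet above its own source code
var fs = require('fs');

// Escape a snippet so it displays as code rather than rendering
function escapeHtml(html) {
  return html
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

var sections = fs.readdirSync('patterns')
  .filter(function (file) { return file.slice(-5) === '.html'; })
  .map(function (file) {
    var snippet = fs.readFileSync('patterns/' + file, 'utf8');
    return '<section><h2>' + file + '</h2>' +
      snippet + // the pattern itself, rendered
      '<pre><code>' + escapeHtml(snippet) + '</code></pre>' + // its source
      '</section>';
  });

fs.writeFileSync('primer.html', sections.join('\n'));
console.log('Built primer.html with ' + sections.length + ' patterns');
Because the page is generated from the same snippets used in the site, the style guide can’t drift out of date the way a hand-maintained document does.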
Start work on your style guide as soon as you can, preferably during the design stage: Simply presenting flat image comps is by no means enough - it’s only the start… As layouts become more adaptable, flexible and context-specific, so individual components will become the focus of our design. It is therefore essential to get the foundational aspects of our designs right, and style guides allow us to do that. Paul Lloyd on Style guides for the Web Print out the designs and label the unique elements and components you’ll need to add to your style guide. Make a note of the purpose of each component. While you’re doing this, identify the main colours used for things like links, headings and buttons. I draw over the print-outs on to tracing paper so I can make more annotations. Here, I’ve started annotating the widths from the designer’s mockup so I can translate these into percentages. Start developing your style guide with base styles that target core elements: headings, links, tables, blockquotes, ordered lists, unordered lists and forms. For these elements, you could maintain a standard document to reuse for every project. Next, add the components that override the base styles, like search boxes, breadcrumbs, feedback messages and blog comments. Include interaction styles, such as hover, focus and visited state on links, and hover, focus and active states on buttons. Now start adding layout and begin slotting the components into place. You may want to present each layout as a separate document, or you could have them all on the same page stacked beneath one another. Document code practices Code can look messy when people use different conventions, so note down a standard approach alongside your style guide. For example, Paul Stanton has documented how he writes CSS. The gift wrapping Presenting this documentation to your client may be a little overwhelming so, to be really helpful, create a simple page that links together all your files and explains what each of them do. This is an example of a contents page that Clearleft produce for their clients. They’ve added date stamps, subversion revision numbers and written notes for each file. Encourage participation There’s always a risk that the person you’re writing the style guide for will ignore it completely, so make your documentation as user-friendly as possible. Justify why you do things a certain way to make it more approachable and encourage similar behaviour. As always, good communication helps. Working with the designer to put together this document will improve the site. It’s often not practical for designers to provide a style for everything, so drafting a web style guide and asking for feedback gives designers a chance to make sure any default styles fit in. If you work in a team with other developers, documenting your code and development decisions will not only be useful as a deliverable, but will also force you to think about why you do things a certain way. Future-friendly The roles of designer and developer are increasingly blurred, yet all too often we work in isolation. Working side-by-side with designers on web style guides can vastly improve the quality of our work, and the collaborative approach can spark discussions like “how would this work on a smaller screen?” Sometimes we can be so focused on getting the site ready and live, that we lose sight of what happens after it’s launched, and how it’s going to be maintained. A simple web style guide can make all the difference. 
If you make your own style guide, I’d love to add it to my stash of examples so please share a link to it in the comments. 2011 Anna Debenham annadebenham 2011-12-07T00:00:00+00:00 https://24ways.org/2011/front-end-style-guides/ process
73 How to Make Your Site Look Half-Decent in Half an Hour Programmers like me are often intimidated by design – but a little effort can give a huge return on investment. Here are one coder’s tips for making any site quickly look more professional. I am a programmer. I am not a designer. I have a degree in computer science, and I don’t mind Comic Sans. (It looks cheerful. Move on.) But although I am a programmer, I want to make my sites look attractive. This is partly out of vanity, and partly realism. Vanity because I want people to think my work is good, and realism because the research shows that people won’t think a site is credible unless it also looks attractive. For a very long time after I became a programmer, I was scared of design. Design seemed to consist of complicated rules that weren’t written down anywhere, plus an unlearnable sense of taste, possessed only by a black-clad elite. But a little while ago, I decided to do my best to hack what it took to make my own projects look vaguely attractive. And although this doesn’t come close to the effect a professional designer could achieve, gathering these resources for improving a site’s look and feel has been really helpful. If I hadn’t figured out some basic design shortcuts, it’s unlikely that a weekend hack of mine would have ended up on page three of the Daily Mail. And too often now, I see excellent programming projects that don’t reach the audience they deserve, simply because their design doesn’t match their execution. So, if you are a developer, my Christmas present to you is this: my own collection of hacks that, used rightly, can make your personal programming projects look professional, quickly. None are hard to learn, most are free, and they let you focus on writing code. One thing to note about these tips, though. They are a personal, pragmatic compilation. They are suggestions, not a definitive guide. You will definitely get better results by working with a professional designer, and by studying design more deeply. If you are a designer, I would love to hear your suggestions for the best tools that I’ve missed, and I’d love to know how we programmers can do things better. With that, on to the tools… 1. Use Bootstrap If you’re not already using Bootstrap, start now. I really think that Bootstrap is one of the most significant technical achievements of the last few years: it democratizes the whole process of web design. Essentially, Bootstrap is a grid system, with a bunch of common elements. So you can lay your site out how you want, drop in simple elements like forms and tables, and get a good-looking, consistent result, without spending hours fiddling with CSS. You just need HTML. Another massive upside is that it makes it easy to make any site responsive, so you don’t have to worry about writing media queries. Go, get Bootstrap and check out the examples. To keep your site lightweight, you can customize your download to include only the elements you want. If you have more time, then Mark Otto’s article on why and how Bootstrap was created at Twitter is well worth a read. 2. Pimp Bootstrap Using Bootstrap is already a significant advance on not using Bootstrap, and massively reduces the tedium of front-end development. But you also run the risk of creating Yet Another Bootstrap Site, or Hack Day Design, as it’s known. If you’re really pressed for time, you could buy a theme from Wrap Bootstrap. These are usually created by professional designers, and will give a polish that we can’t achieve ourselves.
Your site won’t be unique, but it will look good quickly. Luckily, it’s pretty easy to make Bootstrap not look too much like Bootstrap – using fonts, CSS effects, background images, colour schemes and so on. Most of the rest of this article covers different ways to achieve this. We are going to customize this Bootstrap example page. This already has some custom CSS in the <head>. We’ll pull it all out, and create a new CSS file, custom.css. Then we add a reference to it in the header. Now we’re ready to start customizing things. 3. Fonts Web fonts are one of the quickest ways to make your site look distinctive, modern, and less Bootstrappy, so we’ll start there. First, we can add some sweet fonts, from Google Web Fonts. The intimidating bit is choosing fonts that look nice together. Luckily, there are plenty of suggestions around the web: we’re going to use one of DesignShack’s suggested free Google Fonts pairings. Our fonts are Corben (for headings) and Nobile (for body copy). Then we add these files to our <head>. <link href="http://fonts.googleapis.com/css?family=Corben:bold" rel="stylesheet" type="text/css"> <link href="http://fonts.googleapis.com/css?family=Nobile" rel="stylesheet" type="text/css"> …and this to custom.css: h1, h2, h3, h4, h5, h6 { font-family: 'Corben', Georgia, Times, serif; } p, div { font-family: 'Nobile', Helvetica, Arial, sans-serif; } Now our example looks like this. It’s not going to win any design awards, but it’s immediately better: I also recommend the web font services Fontdeck, or Typekit – these have a wider selection of fonts, and are worth the investment if you regularly need to make sites look good. For more font combinations, Just My Type suggests appealing pairings from Typekit. Finally, you can experiment with type pairing ideas at Type Connection. For the design background on pairing fonts, Typekit’s post is worth a read. 4. Textures An instant way to make a site look classy is to use textures. You know the grey, stripy, indefinably elegant background on 24ways.org? That. If only there was a superb resource listing attractive, free, ready-to-use textures… Oh wait, there’s Atle Mo’s Subtle Patterns. We’re going to use Cream Dust, for an effect that can only be described as subtle. We download the file to a new /img/ directory, then add this to the CSS file: body { background: url(/img/cream_dust.png) repeat 0 0; } Bang: For some design background on patterns, I recommend reading through Smashing Magazine’s guidelines on textures. (TL;DR version: use textures to enhance beauty, and clarify the information architecture of your site; but don’t overdo it, or inadvertently obscure your text.) Still more to do, though. Onwards. 5. Icons Last year’s 24 ways taught us to use icon fonts for our site icons. This is great for the time-pressed coder, because icon fonts don’t just cut down on HTTP requests – they’re a lot quicker to set up than image-based icons, too. Bootstrap ships with an extensive, free for commercial use icon set in the shape of Font Awesome. Its icons are safe for screen readers, and can even be made to work in IE7 if needed (we’re not going to bother here). To start using these icons, just download Font Awesome, and add the /fonts/ directory to your site and the font-awesome.css file into your /css/ directory. Then add a reference to the CSS file in your header: <link rel="stylesheet" href="/css/font-awesome.css"> Finally, we’ll add a truck icon to the main action button, as follows. Why a truck? Why not? 
<a class="btn btn-large btn-success" href="#"><i class="icon-truck"></i> Sign up today</a> We’ll also tweak our CSS file to stop the icon nudging up against the button text: .jumbotron .btn i { margin-right: 8px; } And this is how it looks: Not the most exciting change ever, but it livens up the page a bit. The licence is CC-BY-3.0, so we also include a mention of Font Awesome and its URL in the source code. If you’d like something a little more distinctive, Shifticons lets you pay a few cents for individual icons, with the bonus that you only have to serve the icons you actually use, which is more efficient. Its icons are also friendly to screen readers, but won’t work in IE7. 6. CSS3 The next thing you could do is add some CSS3 goodness. It can really help the key elements of the site stand out. If you are pressed for time, just adding box-shadow and text-shadow to emphasize headings and standouts can be useful: h1 { text-shadow: 1px 1px 1px #ccc; } .div-that-you want-to-stand-out { box-shadow: 0 0 1em 1em #ccc; } We have a little more time though, so we’re going to do something more subtle. We’ll add a radial gradient behind the main heading, using an online gradient editor. The output is hefty, but you can see it in the CSS. Note that we also need to add the following to our HTML, for IE9 support: <!--[if gte IE 9]> <style type="text/css"> .gradient { filter: none; } </style> <![endif]--> And the effect – I don’t know what a designer would think, but I like the way it makes the heading pop. For a crash course in useful modern CSS effects, I highly recommend CodeSchool’s online course in Functional HTM5 and CSS3. It costs money ($25 a month to subscribe), but it’s worth it for the time you’ll save. As a bonus, you also get access to some excellent JavaScript, Ruby and GitHub courses. (Incidentally, if you find yourself fighting with basic float and display attributes in CSS – and there’s no shame in it, CSS layout is not intuitive – I recommend the CSS Cross-Country course at CodeSchool.) 7. Add a twist We could leave it there, but we’re going to add a background image, and give the site some personality. This is the area of design that I think many programmers find most intimidating. How do we create the graphics and photographs that a designer would use? The answer is iStockPhoto and its competitors – online image libraries where you can find and pay for images. They won’t be unique, but for our purposes, that’s fine. We’re going to use a Christmassy image. For a twist, we’re going to use Backstretch to make it responsive. We must pay for the image, then download it to our /img/ directory. Then, we set it as our <body>’s background-image, by including a JavaScript file with just the following line: $.backstretch("/img/winter.jpg"); We also reset the subtle pattern to become the background for our container image. It would look much better transparent, so we can use this technique in GIMP to make it see-through: .container-narrow { background: url(/img/cream_dust_transparent.png) repeat 0 0; } We also fiddle with the padding on body and .container-narrow a bit, and this is the result: (Aside: If this were a real site, I’d want to buy images in multiple sizes and ensure that Backstretch chose the most appropriately sized image for our screen, perhaps using responsive images.) How to find the effects that make a site interesting? I keep a set of bookmarks for interesting JavaScript and CSS effects I might want to use someday, from realistic shadows to animating grids. 
The JavaScript Weekly newsletter is a great source of ideas. 8. Colour schemes We’re just about getting there – though we’re probably past half an hour now – but that button and that menu still both look awfully Bootstrappy. Real sites, with real designers, have a colour palette, carefully chosen to harmonize and match the brand profile. For our purposes, we’re just going to borrow some colours from the image. We use GIMP’s colour picker tool to identify the hex values of the blue of the snow. Then we can use Color Scheme Designer to find contrasting, but complementary, colours. Finally, we use those colours for our central button. There are lots of tools to help us do this, such as Bootstrap Buttons. The new CSS is quite long, so I won’t paste it all here, but you can find it in the CSS file. We also reset the colour of the pills in the navigation menu, which is a bit easier: .nav-pills > .active > a, .nav-pills > .active > a:hover { background-color: #FF9473; } I’m not sure if the result is great to be honest, but at least we’ve lost those Bootstrap-blue buttons: Another way to do it, if you didn’t have an image to match, would be to borrow an attractive colour scheme. Colourlovers is a community where people create and share ready-made colour palettes. The key thing is to find a palette with an open licence, so you can legitimately use it. Unfortunately, you can’t search for palettes by licence type, but many do have open licences. Here’s a popular palette with a CC-BY-SA licence that allows reuse with attribution. As above, you can use the hex values from the palette in your custom CSS, and bask in the newly colourful results. 9. Read on With the above techniques, you can make a site that is starting to look slightly more professional, pretty quickly. If you have the time to invest, it’s well worth learning some design principles, if only so that design seems less intimidating and more like fun. As part of my design learning, I read a few introductory design books aimed at coders. The best I found was David Kadavy’s Design for Hackers: Reverse-Engineering Beauty, which explains the basic principles behind choosing colours, fonts, typefaces and layout. In the introduction to his book, David writes: No group stands to gain more from design literacy than hackers do… The one subject that is exceedingly frustrating for hackers to try to learn is design. Hackers know that in order to compete against corporate behemoths with just a few lines of code, they need to have good, clear design, but the resources with which to learn design are simply hard to find. Well said. If you have half a day to invest, rather than half an hour, I recommend getting hold of David’s book. And the journey is over. Perhaps that took slightly more than half an hour, but with practice, using the above techniques can become second nature. What useful tools have I missed? Designers, how would you improve on the above? I would love to know, so please give me your views in the comments. 2012 Anna Powell-Smith annapowellsmith 2012-12-16T00:00:00+00:00 https://24ways.org/2012/how-to-make-your-site-look-half-decent/ design
254 What I Learned in Six Years at GDS When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public. Lots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned. 1. What is the user need? 
The main discipline I learned from my time at GDS was to always ask ‘what is the user need?’ It’s very easy to build something that seems like a good idea, but until you’ve identified what problem you are solving for the user, you can’t be sure that you are building something that is going to help solve an actual problem. A really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a “where’s my stuff” for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what’s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres. The project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users’ needs. At the end of this discovery, the team realised that a status tracker wasn’t the way to address the problem. As they wrote in this blog post: Status tracking tools are often just ‘channel shift’ for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place. What would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it’s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes. Making sure you know your user At GDS user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you’d observe users working with things you’d built, but even if they weren’t, it was an incredibly valuable experience, and something you should seek out if you are able to. Even if we think we understand our users very well, it is very enlightening to see how users actually use your stuff. Partly because in technology we tend to be power users and the average user doesn’t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged. User needs is not just about building things Asking the question “what is the user need?” really helps focus on why you are doing what you are doing. It keeps things on track, and helps the team think about what the actual desired end goal is (and should be). Thinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What’s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change and any areas you need particular help on with the review. Or you are writing an email to a colleague. What’s the user need? What are you hoping the reader will learn, understand or do as a result of your email? 2. Make things open: it makes things better The second important thing I learned at GDS was ‘make things open: it makes things better’. 
This works on many levels: being open about your strategy, blogging about what you are doing and what you’ve learned (including mistakes), and – the part that I got most involved in – coding in the open. Talking about your work helps clarify it One thing we did really well at GDS was blogging – a lot – about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you’re doing it; and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions. It’s also really valuable to blog about what you’ve learned, especially if you’ve made a mistake. It makes sure you’ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning, which helps make things better. Coding in the open has a lot of benefits In my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed). When I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people’s minds at rest about these things is why it’s crucial to have good standards around communication and positive behaviour - even a critical code review should be considerately given). But I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won’t be looking) makes everyone raise their game slightly. The very fact that you know it’s open makes you make it a bit better. It helps with lots of other things as well, for example it makes it easier to collaborate with people and share your work. And now that I’ve left GDS, it’s so useful to be able to look back at code I worked on to remember how things worked. Share what you learn It’s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practise. It helps you clarify your thoughts and follow through on what you’ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well. 3. Do the hard work to make it simple (tech edition) ‘Start with user needs’ and ‘Make things open: it makes things better’ are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: ‘Do the hard work to make it simple’, and specifically, how this manifests itself in the way we build technology. At GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages is taken very seriously. 
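To give a flavour of what that looks like, here’s my own sketch of the shape a good commit message takes – a one-line summary of what changed, then a body explaining why. This is an illustration only, not an excerpt from the actual guidance:

Add retry behaviour to the notifications client

Requests to the provider occasionally time out when it is
under load. Retry up to three times with a short backoff so
that transient failures are not surfaced to users.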
There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make it clearer. We worked very hard on making pull requests good, keeping the reviewer in mind and making it clear to the user how best to review it. Reviewing others’ pull requests is the highest priority so that no-one is blocked, and teams have screens showing the status of open pull requests (using fourth wall) and we even had a ‘pull request seal’, a bot that publishes pull requests to Slack and gets angry if they are uncommented on for more than two days. Making it easier for developers to support the site Another example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone made to be open and inclusive to developers. The team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and also patiently explained things whenever other devs in similar positions came with questions. The main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual which detailed what the alert meant and some suggested actions that could be taken to address it. This was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they’d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day. Developers are users too Doing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work. These three principles help us make great things I learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing. I learned about the importance of good comms, working late responsibly, and the value of content design. And the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I’ve talked about here: think about the user need, make things open, and do the hard work to make it simple. 2018 Anna Shipman annashipman 2018-12-08T00:00:00+00:00 https://24ways.org/2018/what-i-learned-in-six-years-at-gds/ business
292 Watch Your Language! I’m bilingual. My first language is French. I learned English in my early 20s. Learning a new language later in life meant that I was able to observe my thought processes changing over time. It made me realize that some concepts can’t be expressed in some languages, while other languages express these concepts with ease. It also helped me understand the way we label languages. English: business. French: romance. Here’s an example of how words, or the absence thereof, can affect the way we think: In French we love everything. There’s no straightforward way to say we like something, so we just end up loving everything. I love my sisters, I love broccoli, I love programming, I love my partner, I love doing laundry (this is a lie), I love my mom (this is not a lie). I love, I love, I love. It’s no wonder French is considered romantic. When I first learned English I used the word love rather than like because I hadn’t grasped the difference. Needless to say, I’ve scared away plenty of first dates! Learning another language made me realize the limitations of my native language and revealed concepts I didn’t know existed. Without the nuances a given language provides, we fail to express what we really think. The absence of words in our vocabulary gets in the way of effectively communicating and considering ideas. When I lived in Montréal, most people in my circle spoke both French and English. I could switch between them when I could more easily express an idea in one language or the other. I liked (or should I say loved?) those conversations. They were meaningful. They were efficient. I’m quadrilingual. I code in Ruby, HTML/CSS, JavaScript, Python. In the past couple of years I have been lucky enough to write code in these languages at a massive scale. In learning Ruby, much like learning English, I discovered the strengths and limitations of not only the languages I knew but the language I was learning. It taught me to choose the right tool for the job. When I started working at Shopify, making a change to a view involved copy/pasting HTML and ERB from one view to another. The CSS was roughly structured into modules, but those modules were not responsive to different screen sizes. Our HTML was complete mayhem, and we didn’t consider accessibility. All this made editing views a laborious process. Grep. Replace all. Test. Ship it. Repeat. This wasn’t sustainable at Shopify’s scale, so the newly-formed front end team was given two missions: Make the app responsive (AKA Let’s Make This Thing Responsive ASAP) Make the view layer scalable and maintainable (AKA Let’s Build a Pattern Library… in Ruby) Let’s make this thing responsive ASAP The year was 2015. The Shopify admin wasn’t mobile friendly. Our browser support was set to IE10. We had the wind in our sails. We wanted to achieve complete responsiveness in the shortest amount of time. Our answer: container queries. It seemed like the obvious decision at the time. We would be able to set rules for each component in isolation and the component would know how to lay itself out on the page regardless of where it was rendered. It would save us a ton of development time since we wouldn’t need to change our markup, it would scale well, and we would achieve complete component autonomy by not having to worry about page layout. By siloing our components, we were going to unlock the ultimate goal of componentization, cutting the tie to external dependencies. We were cool. 
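To make the idea concrete, here’s a minimal sketch of what JavaScript-driven container queries can look like. This is a hypothetical illustration, not Shopify’s actual implementation: each component measures its own width and toggles a class, so its CSS can respond to the space it occupies rather than to the viewport.

// Hypothetical sketch of JavaScript "container queries": each
// component inspects its own width and toggles a class, so its
// styles can respond to its container rather than the viewport.
const components = document.querySelectorAll('[data-component]')
const applyContainerQueries = () => {
  components.forEach(el => {
    // 480 is an arbitrary breakpoint for this illustration
    el.classList.toggle('is-narrow', el.offsetWidth < 480)
  })
}
window.addEventListener('resize', applyContainerQueries)
applyContainerQueries()

Note that nothing can settle into place until a script like this has run and measured the DOM.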
Writing the JavaScript handling container queries was my first contribution to Shopify. It was a satisfying project to work on. We could drop our components in anywhere and they would magically look good. It took us less than a couple of weeks to push this to production and make our app mostly responsive. But with time, it became increasingly obvious that this was not as performant as we had hoped. It wasn’t performant at all. Components would jarringly jump around the page before settling in on first paint. It was only when we started using the flex-wrap: wrap CSS property to build new components that we realized we were not using the right language for the job. So we swapped out JavaScript container queries for CSS flex-wrapping. Even though flex wasn’t yet as powerful as we wanted it to be, it was still a good compromise. Our components stayed independent of the window size but took much less time to render. Best of all: they used CSS instead of relying on JavaScript for layout. In other words: we were using the wrong language to express our layout to the browser, when another language could do it much more simply and elegantly. Let’s build a pattern library… in Ruby In order to make our view layer maintainable, we chose to build a comprehensive library of helpers. This library would generate our markup from a single source of truth, allowing us to make changes system-wide, in one place. No. More. Grepping. When I joined Shopify it was a Rails shop freshly wounded by a JavaScript framework (See: Batman.js). JavaScript was like Voldemort, the language that could not be named. Because of this baggage, the only way for us to build a pattern library that would get buy-in from our developers was to use Rails view helpers. And for many reasons using Ruby was the right choice for us. The time spent ramping developers up on the new UI Components would be negligible since the Ruby API felt familiar. The transition would be simple since we didn’t have to introduce any new technology to the stack. The components would be fast since they would be rendered on the server. We had a plan. We put in place a set of Rails tools to make it easy to build components, then wrote a bunch of sweet, sweet components using our shiny new tools. To document our design, content and front end patterns we put together an interactive styleguide to demonstrate how every component works. Our research and development department loved it (and still does)! We continue to roll out new components, and generally the project has been successful, though it has had its drawbacks. Since the Shopify admin is mostly made up of a huge number of forms, most of the content is static. For this reason, using server-rendered components didn’t seem like a problem at the time. With new app features increasing the amount of DOM manipulation needed on the client side, our early design decisions mean making requests to the server for each re-paint. This isn’t going to cut it. I don’t know the end of this story, because we haven’t written it yet. We’ve been exploring alternatives to our current system to facilitate the rendering of our components on the client, including React, Vue.js, and Web Components, but we haven’t determined the winner yet. Only time (and data gathering) will tell. Ruby is great but it doesn’t speak the browser’s language efficiently. It was not the right language for the job. Learning a new spoken language has had an impact on how I write code. 
It has taught me that you don’t know what you don’t know until you have the language to express it. Understanding the strengths and limitations of any programming language is fundamental to making good design decisions. At the end of the day, you make the best choices with the information you have. But if you still feel like you’re unable to express your thoughts to the fullest with what you know, it might be time to learn a new language. 2016 Annie-Claude Côté annieclaudecote 2016-12-10T00:00:00+00:00 https://24ways.org/2016/watch-your-language/ code
152 CSS for Accessibility CSS is magical stuff. In the right hands, it can transform the plainest of (well-structured) documents into a visual feast. But it’s not all fur coat and nae knickers (as my granny used to say). Here are some simple ways you can use CSS to improve the usability and accessibility of your site. Even better, no sexy visuals will be harmed by the use of these techniques. Promise. Nae knickers This is less of an accessibility tip, and more of a reminder to check that you’ve got your body background colour specified. If you’re sitting there wondering why I’m mentioning this, because it’s a really basic thing, then you might be as surprised as I was to discover that from a sample of over 200 sites checked last year, 35% of UK local authority websites were missing their body background colour. Forgetting to specify your body background colour can lead to embarrassing gaps in coverage, which are not only unsightly, but can prevent your users reading the text on your site if they use a different operating system colour scheme. All it needs is the following line to be added to your CSS file: body {background-color: #fff;} If you pair it with color: #000; … you’ll be assured of maintaining contrast for any areas you inadvertently forget to specify, no matter what colour scheme your user needs or prefers. Even better, if you’ve got standard reset CSS you use, make sure that default colours for background and text are specified in it, so you’ll never be caught with your pants down. At the very least, you’ll have a white background and black text that’ll prompt you to change them to your chosen colours. Elbow room Paying attention to your typography is important, but it’s not just about making it look nice. Careful use of the line-height property can make your text more readable, which helps everyone, but is particularly helpful for those with dyslexia, who use screen magnification or simply find it uncomfortable to read lots of text online. When lines of text are too close together, it can cause the eye to skip down lines when reading, making it difficult to keep track of what you’re reading across. So, a bit of room is good. That said, when lines of text are too far apart, it can be just as bad, because the eye has to jump to find the next line. That not only breaks up the reading rhythm, but can make it much more difficult for those using Screen Magnification (especially at high levels of magnification) to find the beginning of the next line which follows on from the end of the line they’ve just read. Using a line height of between 1.2 and 1.6 times normal can improve readability, and using unit-less line heights help take care of any pesky browser calculation problems. For example: p { font-family: "Lucida Grande", Lucida, Verdana, Helvetica, sans-serif; font-size: 1em; line-height: 1.3; } or if you want to use the shorthand version: p { font: 1em/1.3 "Lucida Grande", Lucida, Verdana, Helvetica, sans-serif; } View some examples of different line-heights, based on default text size of 100%/1em. Further reading on Unitless line-heights from Eric Meyer. Transformers: Initial case in disguise Nobody wants to shout at their users, but there are some occasions when you might legitimately want to use uppercase on your site. 
You can avoid the screen-reader pronunciation weirdness caused by uppercase text (where, for example, CONTACT US would be read out as Contact U S, which is not the same thing – unless you really are offering your users the chance to contact the United States) by writing your text in title case and using the often neglected text-transform property to fake uppercase. For example: .uppercase { text-transform: uppercase } Don’t overdo it though, as uppercase text is harder to read than normal text, not to mention the whole SHOUTING thing. Linky love When it comes to accessibility, keyboard only users (which includes those who use voice recognition software) who can see just fine are often forgotten about in favour of screen reader users. This Christmas, share the accessibility love and light up those links so all of your users can easily find their way around your site. The link outline AKA: the focus ring, or that dotted box that goes around links to show users where they are on the site. The techniques below are intended to supplement this, not take the place of it. You may think it’s ugly and want to get rid of it, especially since you’re going to the effort of tarting up your links. Don’t. Just don’t. The non-underlined underline If you listen to Jakob Nielsen, every link on your site should be underlined so users know it’s a link. You might disagree with him on this (I know I do), but if you are choosing to go with underlined links, in whatever state, then remove the default underline and replace it with a border that sits a couple of pixels away from the text. The underline is still there, but it’s no longer cutting off the bottom of letters with descenders (e.g., g and y), which makes it easier to read. This is illustrated in Examples 1 and 2. You can modify the three lines of code below to suit your own colour and border style preferences, and add it to whichever link state you like. text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; Standing out from the crowd Whatever way you choose to do it, you should be making sure your links stand out from the crowd of normal text which surrounds them when in their default state, and especially in their hover or focus states. A good way of doing this is to reverse the colours when on hover or focus. Well-focused Everyone knows that you can use the :hover pseudo class to change the look of a link when you mouse over it, but, somewhat ironically, people who can see and use a mouse are the group who least need this extra visual clue, since the cursor handily (sorry) changes from an arrow to a hand. So spare a thought for the non-pointing device users that visit your site and take the time to duplicate that hover look by using the :focus pseudo class. Of course, the internets being what they are, it’s not quite that simple, and predictably, Internet Explorer is the culprit once more with its frustrating lack of support for :focus. Instead it applies the :active pseudo class whenever an anchor has focus. What this means in practice is that if you want to make your links change on focus as well as on hover, you need to specify focus, hover and active. Even better, since the look and feel necessarily has to be the same for the last three states, you can combine them into one rule. 
So if you wanted to do a simple reverse of colours for a link, and put it together with the non-underline underlines from before, the code might look like this: a:link { background: #fff; color: #000; font-weight: bold; text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; } a:visited { background: #fff; color: #800080; font-weight: bold; text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; } a:focus, a:hover, a:active { background: #000; color: #fff; font-weight: bold; text-decoration: none; border-bottom: 1px #000 solid; padding-bottom: 2px; } Example 3 shows what this looks like in practice. Location, Location, Location To take this example to its natural conclusion, you can add an id of current (or something similar) in appropriate places in your navigation, specify a full set of link styles for current, and have a navigation which, at a glance, lets users know which page or section they’re currently in. Example navigation using location indicators, and the source code. Conclusion All the examples here are intended to illustrate the concepts, and should not be taken as the absolute best way to format links or style navigation bars – that’s up to you and whatever visual design you’re using at the time. They’re also not the only things you should be doing to make your site accessible. Above all, remember that accessibility is for life, not just for Christmas. 2007 Ann McMeekin annmcmeekin 2007-12-13T00:00:00+00:00 https://24ways.org/2007/css-for-accessibility/ design
2 Levelling Up Hello, 24 ways. I’m Ashley and I sell property insurance. I’m interrupting your Christmas countdown with an article about rental property software and a guy, Pete, who selflessly encouraged me to build my first web app. It doesn’t sound at all festive, or — considering I’ve used both “insurance” and “rental property” — interesting, but do stick with me. There’s eggnog at the end. I run a property insurance business, Brokers Direct. It’s a small operation, but well established. We’ve been selling landlord insurance on the web for over thirteen years, for twelve of which we have provided our clients with third-party software for managing their rental property portfolios. Free. Of. Charge. It sounds like a sweet deal for our customers, but it isn’t. At least, not any more. The third-party software has fallen victim to years of neglect by its vendor. Its questionable interface, garish visuals and, ahem, clip art icons have suffered from a lack of updates. While it was never a contender for software of the year, I’ve steadily grown too embarrassed to associate my business with it. The third-party rental property software we distributed I wanted to offer my customers a simple, clean and lightweight alternative. In an industry that’s dominated by dated and bloated software, it seemed only logical that I should build my own rental property tool. The long learning-to-code slog Learning a programming language is daunting, and much of my frustration stemmed from having a non-programming background. Generally, tutorials assume a degree of familiarity with programming, whether it be tools, conventions or basic skills. I had none and, at the time, there was nothing on the web really geared towards a novice. I reached the point where I genuinely thought I was just not cut out for coding. Surrendering to my feelings of self-doubt and frustration, I sourced a local Rails developer, Pete, to build it for me. Pete brought a pack of index cards to our meeting. Index cards that would represent each feature the rental property software would launch with. “OK,” he began. “We’ll need a user model, tenant model, authentication, tenant and property relationships…” A dozen index cards with a dozen features lined the coffee table in a grid-like format. Logical, comprehensible, achievable. Seeing the app laid out in a digestible manner made it seem surmountable. Maybe I could do this. “I’ve been trying to learn Rails…”, I piped up. I don’t know why I said it. I was fully prepared to hire Pete to do the hard work for me. But Pete, unprompted, gathered the index cards and neatly stacked them together, coasting them across the table towards me. “You should build this”. Pete, a full-time freelance developer at the time, was turning down a paying job in favour of encouraging me to learn to code. Looking back, I didn’t realise how significant this moment was. That evening, I took Pete’s index cards home to make a start on my app, slowly evolving each of the cards into a working feature. Building the app solo, I turned to Stack Overflow to solve the inevitable coding hurdles I encountered, as well as calling on a supportive Rails community. Whether they provided direct solutions to my programming woes, or simply planted a seed on how to solve a problem, I kept coding. Many months later, and after several more doubtful moments, Lodger was born. Property overview of my app, Lodger. 
If I can do it, so can you I misspent a lot of time building Twitter and blogging applications (apparently, all Rails tutorials centre around Twitter and blogging). If I could rewind and impart some advice to myself, this is what I’d say. There’s no magic formula “I haven’t quite grasped Rails routing. I should tackle another tutorial.” Making excuses — or procrastination — is something we are all guilty of. I was waiting for a programming book that would magically deposit a grasp of the entire Ruby syntax in my head. I kept buying books thinking each one would be the one where it all clicked. I now have a bookshelf full of Ruby material, all of which I’ve barely read, and none of which got me any closer to launching my web app. Put simply, there’s no magic formula. Break it down Whatever it is you want to build, break it down into digestible chunks. Taking Pete’s method as an example, having an index card represent an individual feature helped me tremendously. Tackle one at a time. Even if each feature takes you a month to build, and you have eight features to launch with, after eight months you’ll have your MVP. Remember, if you do nothing each day, it adds up to nothing. Have a tangible product to build I have a wonderful habit of writing down personal notes, usually to express my feelings at the time or to log an idea, only to uncover them months or years down the line, long after I’d forgotten writing them. I made a timely discovery while writing this article, discovering this gem while flicking through a battered Moleskine: “I don’t seem to be making good progress with learning Rails, but development still excites me. I should maybe stop doing tutorials and work towards building a specific app.” Having a real product to work on, like I did with Lodger, means you have something tangible to apply the techniques you are learning. I found this prevented me from flitting aimlessly between tutorials and books, which is an easy place to get stuck. Team up If possible, team up with a designer and create something together. Designers are great at presenting features in a way you’d never have considered. You will learn a lot from making their designs come to life. Your homework for the holiday Despite having a web app under my belt, I am not a programmer. I tinker with code, piecing enough bits of it together to make something functional. And that’s OK! I’m not excusing sloppiness, but if we aimed for perfection every time, we’d never execute any of our ideas. As the holidays approach and you’ve exhausted yet another viewing of The Muppet Christmas Carol (or is that just my guilty pleasure at Christmas?), you may have time on your hands. Time to explore an idea you’ve been sitting on, but — plagued with procrastination and doubt — have yet to bring to life. This holiday, I am here to say to you what Pete said to me. You should build this. You don’t need to be the next Mark Zuckerberg or Larry Page. You just have to learn enough to get it done. PS: I lied about the eggnog, but try capturing somebody’s attention when you tell them you sell property insurance! 2013 Ashley Baxter ashleybaxter 2013-12-06T00:00:00+00:00 https://24ways.org/2013/levelling-up/ business
272 Crafting the Front-end Much has been spoken and written recently about the virtues of craftsmanship in the context of web design and development. It seems that we as fabricators of the web are finally tiring of seeking out parallels between ourselves and architects, and are turning instead to the fabled specialist artisans. Identifying oneself as a craftsman or craftswoman (let’s just say craftsperson from here onward) will likely be a trend of early 2012. In this pre-emptive strike, I’d like to expound on this movement as I feel it pertains to front-end development, and encourage care and understanding of the true qualities of craftsmanship (craftspersonship). The core values I’ll begin by defining craftspersonship. What distinguishes a craftsperson from a technician? Dictionaries tend to define a craftsperson as one who possesses great skill in a chosen field. The badge of a craftsperson for me, though, is a very special label that should be revered and used sparingly, only where it is truly deserved. A genuine craftsperson encompasses a few other key traits, far beyond raw skill, each of which must be learned and mastered. A craftsperson has: An appreciation of good work, in both the work of others and their own. And not just good as in ‘hey, that’s pretty neat’, I mean a goodness like a shining purity – the kind of good that feels right when you see it. A belief in quality at every level: every facet of the craftsperson’s product is as crucial as any other, without exception, even those normally hidden from view. Vision: an ability to visualize their path ahead, pre-empting the obstacles that may be encountered to plan a route around them. A preference for simplicity: an almost Bauhausesque devotion to undecorated functionality, with no unjustifiable parts included. Sincerity: producing work that speaks directly to its purpose with flawless clarity. Only when you become a custodian of such values in your work can you consider calling yourself a craftsperson. Now let’s take a look at some steps we front-end developers can take on our journey of enlightenment toward craftspersonhood. Speaking of the craftsman’s journey, be sure to watch out for the video of The Standardistas’ stellar talk at the Build 2011 conference titled The Journey, which should be online sometime soon. Building your own toolbox My grandfather was a carpenter and trained as a young apprentice under a master. After observing and practising the many foundation theories, principles and techniques of carpentry, he was tasked with creating his own set of woodworking tools, which he would use and maintain throughout his career. By going through the process of having to create his own tools, he would be connected at the most direct level with every piece of wood he touched, his tools being his own creations and extensions of his own skilled hands. The depth of his knowledge of these tools must have surpassed the intricate as he fathered, used, cleaned and repaired them, day in and day out over many years. And so it should be, ideally, with all crafts. We must understand our tools right down to the most fundamental level. I firmly believe that a level of true craftsmanship cannot be reached while there exists a layer that remains not wholly understood between a creator and his canvas. 
Of course, our tools as front-end developers are somewhat more complex than those of other crafts – it may seem reasonable to require that a carpenter create his or her own set of chisels, but somewhat less so to ask a front-end developer to code their own CSS preprocessor, or design their own computer. However, it is still vitally important that you understand how your tools work. This is particularly critical when it comes to things like preprocessors, libraries and frameworks which aim to save you time by automating common processes and functions. For the most part, anything that saves you time is a Good Thing™ but it cannot be stressed enough that using tools like these in earnest should be avoided until you understand exactly what they are doing for you (and, to an extent, how they are doing it). In particular, you must understand any drawbacks to using your tools, and any shortcuts they may be taking on your behalf. I’m not suggesting that you steer clear of paid work until you’ve studied each of jQuery’s 9,266 lines of JavaScript source code but, all levity aside, it will further you on your journey to look at interesting or relevant bits of jQuery, and any other libraries you might want to use. Such libraries often directly link to corresponding sections of their source code on sites like GitHub from their official documentation. Better yet, they’re almost always written in high level languages (easy to read), so there’s no excuse not to don your pith helmet and go on something of an exploration. Any kind of tangential learning like this will drive you further toward becoming a true craftsperson, so keep an open mind and always be ready to step out of your comfort zone. Downtime and tool honing With any craft, it is essential to keep your tools in good condition, and a good idea to stay up-to-date with the latest equipment. This is especially true on the web, which, as we like to tell anyone who is still awake more than a minute after asking what it is that we do, advances at a phenomenal pace. A tool or technique that could be considered best practice this week might be the subject of haughty derision in a comment thread within six months. I have little doubt that you already spend a chunk of time each day keeping up with the latest material from our industry’s finest Interblogs and Twittertubes, but do you honestly put aside time to collect bookmarks and code snippets from things you read into a slowly evolving toolbox? At @media in 2009, Simon Collison delivered a candid talk on his ‘Ultimate Package’. Those of us who didn’t flee the room anticipating a newfound and unwelcome intimacy with the contents of his trousers were shown how he maintained his own toolkit – a collection of files and folders all set up and ready to go for a new project. By maintaining a toolkit in this way, he has consistency across projects and a dependable base upon which to learn and improve. The assembly and maintenance of such a personalized and familiar toolkit is probably as close as we will get to emulating the tool making stage of more traditional craft trades. Keep a master copy of your toolkit somewhere safe, making copies of it for new projects. When you learn of a way in which part of it can be improved, make changes to the master copy. Simplicity through modularity I believe that the user interfaces of all web applications should be thought of as being made up primarily of modular components. Modules in this context are patterns in design that appear repeatedly throughout the app. 
These can be small collections of elements, like a user profile summary box (profile picture, username, meta data), as well as atomic elements such as headings and list items. Well-crafted front-end architectures have the ability to support this kind of repeating pattern as modules, with as close to no repetition of CSS (or JavaScript) as possible, and as close to no variations in HTML between instances as possible. One of the most fundamental and well known tenets of software engineering is the DRY rule – don’t repeat yourself. It requires that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system.” As craftspeople, we must hold this rule dear and apply it to the modules we have identified in our site designs. The moment you commit a second style definition for a module, the quality of your output (the front-end code) takes a huge hit. There should only ever be one base style definition for each distinct module or component. Keep these in a separate, sacred place in your CSS. I use a _modules.scss Sass include file, imported near the top of my main CSS files. Be sure, of course, to avoid making changes to this file lightly, as the smallest adjustment can affect multiple pages (hint: keep a structure list of which modules are used on which pages). Avoid the inevitable temptation to duplicate code late in the project. Sticking to this rule becomes more important the more complex the codebase becomes. If you can stick to this rule, using sensible class names and consistent HTML, you can reach a joyous, self-fulfilling plateau stage in each project where you are assembling each interface from your own set of carefully crafted building blocks. Old school markup Let’s take a step back. Before we fret about creating a divinely pure modular CSS framework, we need to know the site’s design and what it is made of. The best way to gain this knowledge is to go old school. Print out every comp, mockup, wireframe, sketch or whatever you have. If there are sections of pages that are hidden until some user action takes place, or if the page has multiple states, be sure that you have everything that could become visible to the user on paper. Once you have your wedge of paper designs, lay out all the pages on the floor, or stick them to the wall if you can, arranging them logically according to the site hierarchy, by user journey, or whatever guidelines make most sense to you. Once you have the site laid out before you, study it for a while, familiarizing yourself with every part of every interface. This will eliminate nasty surprises late in the project when you realize you’ve duplicated something, or left an interface on the drawing board altogether. Now that you know the site like it’s your best friend, get out your pens or pencils of choice and attack it. Mark it up like there’s no tomorrow. Pretend you’re a spy trying to identify communications from an enemy network hiding their messages in newspapers. Look for patterns and similarities, drawing circles around them. These are your modules. Start also highlighting the differences between each instance of these modules, working out which is the most basic or common type that will become the base definition from which all other representations are extended. This simple but empowering exercise will equip you for your task of actually crafting, instead of just building, the front-end. 
Without the knowledge gained from this kind of research phase, you will be blundering forward, improvising as best you can, but ultimately making quality-compromising mistakes that could have been avoided. For more on this theme, read Anna Debenham’s Front-end Style Guides which recommends a similar process, and the sublime idea of extending this into a guide to refer to during development and beyond. Design homogeneity Moving forward again, you now have your modules defined and things are looking good. I mentioned that many instances of these modules will carry minor differences. These differences must be given significant thinking time, and discussion time with your designer(s). It should be common knowledge by now that successful software projects are not the product of distinct design and build phases with little or no bidirectional feedback. The crucial nature of the designer-developer relationship has been covered in depth this year by Paul Robert Lloyd, and a joint effort from both teams throughout the project lifecycle is pivotal to your ability to craft and ship successful products. This relationship comes into play when you’re well into the development of the site, and you start noticing these differences between instances of modules (they’ll start to stand out very clearly to you and your carefully regimented modular CSS system). Before you start overriding your base styles, question the differences with the designer to work out why they exist. Perhaps they are required and are important to their context, but perhaps they were oversights from earlier design revisions, or simple mistakes. The craftsperson’s gland As you grow towards the levels of expertise and experience where you can proudly and honestly consider yourself a craftsperson, you will find that you steadily develop what initially feels like a kind of sixth sense. I think of it more as a new hormonal gland, secreting into your bloodstream a powerful messenger chemical that can either reward or punish your brain. This gland is connected directly to your core understanding of what good quality work looks and feels like, an understanding that itself improves with experience. This gland will make itself known to you in two ways. First, when you solve a problem in a beautifully elegant way with clean and unobtrusive code that looks good and just feels right, your craftsperson’s gland will ooze something delicious that makes your brain and soul glow from the inside out. You will beam triumphantly at the succinct lines of code on your computer display before bounding outside with a spring in your step to swim up glittering rainbows and kiss soft fluffy puppies. The second way that you may become aware of your craftsperson’s gland, though, is somewhat less pleasurable. In an alternate reality, your parallel self is faced with the same problem, but decides to take a shortcut and get around it by some dubious means – the kind of technical method that the words hack, kludge and bodge are reserved for. As soon as you have done this, or even as you are doing it, your craftsperson’s gland will damn well let you know that you took the wrong fork in the road. 
As your craftsperson’s gland begins to secrete a toxic pus, you will at first become entranced into a vacant stare at the monstrous mess you are considering unleashing upon your site’s visitors, before writhing in the horrible agony of an itch that can never be scratched, and a feeling of being coated with the devil’s own deep and penetrating filth that no shower will ever cleanse. Perhaps I exaggerate slightly, but it is no overstatement to suggest that you will find yourself being guided by proverbial angels and demons perched on opposite shoulders, or a whispering voice inside your head. If you harness this sense, sharpening it as if it were another tool in your kit and letting it guide or at least advise your decision making, you will transcend the rocky realm of random trial and error when faced with problems, and tend toward the right answers instinctively. This gland can also empower your ability to assess your own work, becoming a judge before whom all your work is cross-examined. A good craftsperson regularly takes a step back from their work, and questions every facet of their product for its precise alignment with their core values of quality and sincerity, and even the very necessity of each component. The wrapping By now, you may be thinking that I take this kind of thing far too seriously, but to terrify you further, I haven’t even shared the half of it. Hopefully, though, this gives you an idea of the kind of levels of professionalism and dedication that it should take to get you on your way to becoming a craftsperson. It’s a level of accomplishment and ability toward which we all should strive, both for our personal fulfilment and the betterment of the products we use daily. I look forward to seeing your finely crafted work throughout 2012. 2011 Ben Bodien benbodien 2011-12-24T00:00:00+00:00 https://24ways.org/2011/crafting-the-front-end/ process
209 Feeding the Audio Graph In 2004, I was given an iPod. I count this as one of the most intuitive pieces of technology I’ve ever owned. It wasn’t because of the snazzy (colour!) menus or circular touchpad. I loved how smoothly it fitted into my life. I could plug in my headphones and listen to music while I was walking around town. Then when I got home, I could plug it into an amplifier and carry on listening there. There was no faff. It didn’t matter if I couldn’t find my favourite mix tape, or if my WiFi was flakey - it was all just there. Nowadays, when I’m trying to pair my phone with some Bluetooth speakers, or can’t find my USB-to-headphone jack, or even access any music because I don’t have cellular reception, I really miss this simplicity. The Web Audio API I think the Web Audio API feels kind of like my iPod did. It’s different from most browser APIs - rather than throwing around data, or updating DOM elements - you plug together a graph of audio nodes, which the browser uses to generate, process, and play sounds. The thing I like about it is that you can totally plug it into whatever you want, and it’ll mostly just work. So, let’s get started. First of all we want an audio source. <audio src="night-owl.mp3" controls /> (Song - Night Owl by Broke For Free) This totally works. However, it’s not using the Web Audio API, so we can’t access or modify the sound it makes. To hook this up to our audio graph, we can use a MediaElementAudioSourceNode. This captures the sound from the element, and lets us connect to other nodes in a graph. const audioCtx = new AudioContext() const audio = document.querySelector('audio') const input = audioCtx.createMediaElementSource(audio) input.connect(audioCtx.destination) Great. We’ve made something that looks and sounds exactly the same as it did before. Go us. Gain Let’s plug in a GainNode - this allows you to alter the amplitude (volume) of an audio stream. We can hook this node up to an <input> element by setting the gain property of the node. (The syntax for this is kind of weird because it’s an AudioParam, which has options to set values at precise intervals.) const gain = audioCtx.createGain() const slider = document.querySelector('input') slider.oninput = () => gain.gain.value = parseFloat(slider.value) input.connect(gain) gain.connect(audioCtx.destination) You can now see a range input, which can be dragged to update the state of our graph. This input could be any kind of element, so now you’ll be free to build the volume control of your dreams. There’s a number of nodes that let you modify/filter an audio stream in more interesting ways. Head over to the MDN Web Audio page for a list of them. Analysers Something else we can add to our graph is an AnalyserNode. This doesn’t modify the audio at all, but allows us to inspect the sounds that are flowing through it. We can put this into our graph between our audio source node and the GainNode. const analyser = audioCtx.createAnalyser() input.connect(analyser) analyser.connect(gain) gain.connect(audioCtx.destination) And now we have an analyser. We can access it from elsewhere to drive any kind of visuals. 
For instance, if we wanted to draw lines on a canvas we could totally do that: const waveform = new Uint8Array(analyser.fftSize) const frequencies = new Uint8Array(analyser.frequencyBinCount) const ctx = canvas.getContext('2d') const loop = () => { requestAnimationFrame(loop) analyser.getByteTimeDomainData(waveform) analyser.getByteFrequencyData(frequencies) ctx.beginPath() waveform.forEach((f, i) => ctx.lineTo(i, f)) ctx.lineTo(0,255) frequencies.forEach((f, i) => ctx.lineTo(i, 255-f)) ctx.stroke() } loop() You can see that we have two arrays of data available (I added colours for clarity): The waveform - the raw samples of the audio being played. The frequencies - a fourier transform of the audio passing through the node. What’s cool about this is that you’re not tied to any specific functionality of the Web Audio API. If it’s possible for you to update something with an array of numbers, then you can just apply it to the output of the analyser node. For instance, if we wanted to, we could definitely animate a list of emoji in time with our music. spans.forEach( (s, i) => s.style.transform = `scale(${1 + (frequencies[i]/100)})` ) 🔈🎤🎤🎤🎺🎷📯🎶🔊🎸🎺🎤🎸🎼🎷🎺🎻🎸🎻🎺🎸🎶🥁🎶🎵 Generating Audio So far, we’ve been using the <audio> element as a source of sound. There’s a few other sources of audio that we can use. We’ll look at the AudioBufferSourceNode - which allows you to manually generate a sound sample, and then connect it to our graph. First we have to create an AudioBuffer, which holds our raw data, then we pass that to an AudioBufferSourceNode which we can then treat just like our audio source node. This can get a bit boring, so we’ll use a helper method that makes it simpler to generate sounds. const generator = (audioCtx, target) => (seconds, fn) => { const { sampleRate } = audioCtx const buffer = audioCtx.createBuffer( 1, sampleRate * seconds, sampleRate ) const data = buffer.getChannelData(0) for (var i = 0; i < data.length; i++) { data[i] = fn(i / sampleRate, seconds) } return () => { const source = audioCtx.createBufferSource() source.buffer = buffer source.connect(target || audioCtx.destination) source.start() } } const sound = generator(audioCtx, gain) Our wrapper will let us provide a function that maps time (in seconds) to a sample (between -1 and 1). This generates a waveform, like we saw before with the analyser node. For example, the following will generate 0.75 seconds of white noise at 20% volume. const noise = sound(0.75, t => Math.random() * 0.2) button.onclick = noise Now we’ve got a noisy button! Handy. Rather than having a static set of audio nodes, each time we click the button, we add a new node to our graph. Although this feels inefficient, it’s not actually too bad - the browser can do a good job of cleaning up old nodes once they’ve played. An interesting property of defining sounds as functions is that we can combine multiple functions to generate new sounds. So if we wanted to fade our noise in and out, we could write a higher order function that does that. const ease = fn => (t, s) => fn(t) * Math.sin((t / s) * Math.PI) const noise = sound(0.75, ease(t => Math.random() * 0.2)) And we can do more than just white noise - if we use Math.sin, we can generate some nice pure tones. 
// Math.sin with period of 0..1 const wave = v => Math.sin(Math.PI * 2 * v) const hz = f => t => wave(t * f) const _440hz = sound(0.75, ease(hz(440))) const _880hz = sound(0.75, ease(hz(880))) 440Hz 880Hz We can also make our functions more complex. Below we’re combining several frequencies to make a richer sounding tone - each harmonic quieter than the last, with the weights summing to one so the result doesn’t clip. const harmony = f => t => [4, 3, 2, 1].reduce( (v, h, i) => v + hz(f * h)(t) * ((i + 1) / 10), 0 ) const a440 = sound(0.75, ease(harmony(440))) Cool. We’re still not using any audio-specific functionality, so we can repurpose anything that does an operation on data. For example, we can use d3.js - usually used for interactive data visualisations - to generate a triangular waveform. const triangle = d3.scaleLinear() .domain([0, .5, 1]) .range([-1, 1, -1]) const wave = t => triangle(t % 1) const a440 = sound(0.75, ease(harmony(440))) It’s pretty interesting to play around with different functions. I’ve plonked everything in jsbin if you want to have a play yourself. A departure from best practice We’ve been generating our audio from scratch, but most of what we’ve looked at can be implemented by a series of native Web Audio nodes. This would be way more performant (because it’s not happening on the main thread), and more flexible in some ways (because you can set timings dynamically whilst the note is playing). But we’re going to stay with this approach because it’s fun, and sometimes the fun thing to do might not technically be the best thing to do. Making a keyboard Having a button that makes a sound is totally great, but how about lots of buttons that make lots of sounds? Yup, totally greater-er. The first thing we need to know is the frequency of each note. I thought this would be awkward because pianos were invented more than 250 years before the Hz unit was defined, so surely there wouldn’t be a simple mapping between the two? const freq = note => 27.5 * Math.pow(2, (note - 21) / 12) This equation blows my mind; I’d never really figured how tightly music and maths fit together. When you see a chord or melody, you can directly map it back to a mathematical pattern. Our keyboard is actually an SVG picture of a keyboard, so we can traverse the elements of it and map each element to a sound generated by one of the functions that we came up with before. Array.from(svg.querySelectorAll('rect')) .sort((a, b) => a.x.baseVal.value - b.x.baseVal.value) .forEach((key, i) => key.addEventListener('touchstart', sound(0.75, ease(harmony(freq(i + 48)))) ) ) rect {stroke: #ddd;} rect:hover {opacity: 0.8; stroke: #000} Et voilà. We have a keyboard. What I like about this is that it’s completely pure - there are no lookup tables or hardcoded attributes; we’ve just defined a mapping from SVG elements to the sound they should probably make. Doing better in the future As I mentioned before, this could be implemented more performantly with Web Audio nodes, or even better - use something like Tone.js to be performant for you. Web Audio has been around for a while, though we’re getting new challenges with immersive WebXR experiences, where spatial audio becomes really important. And there are always support and API improvements coming (if you like AudioBufferSourceNode, you’re going to love AudioWorklet). Conclusion And that’s about it. Web Audio isn’t some black box; you can easily link it with whatever framework or UI you’ve built (whether you should is an entirely different question). If anyone ever asks you “could you turn this SVG into a musical instrument?” you don’t have to stare blankly at them any more.
2017 Ben Foxall benfoxall 2017-12-17T00:00:00+00:00 https://24ways.org/2017/feeding-the-audio-graph/ code
109 Geotag Everywhere with Fire Eagle A note from the editors: Since this article was written Yahoo! has retired the Fire Eagle service. Location, they say, is everywhere. Everyone has one, all of the time. But on the web, it’s taken until this year to see the emergence of location in the applications we use and build. The possibilities are broad. Increasingly, mobile phones provide SDKs to approximate your location wherever you are, and browser extensions such as Loki and Mozilla’s Geode provide browser-level APIs to establish your location from the proximity of wireless networks to your laptop. Yahoo’s Brickhouse group launched Fire Eagle, an ambitious location broker enabling people to take their location from any of these devices or sources, and provide it to a plethora of web services. It enables you to take the location information that only your iPhone knows about and use it anywhere on the web. That said, this is still a time of location as an emerging technology. Fire Eagle stores your location on the web (protected by application-specific access controls), but to try and give an idea of how useful and powerful your location can be — regardless of the services you use now — today’s 24ways is going to build a bookmarklet to call up your location on demand, in any web application. Location Support on the Web Over the past year, the number of applications implementing location features has increased dramatically. Plazes and Brightkite are both full featured social networks based around where you are, whilst Pownce rolled in Fire Eagle support to allow geotagging of all the content you post to their microblogging service. Dipity’s beautiful timeline shows you moving from place to place and Six Apart’s activity stream for Movable Type started exposing your movements. The number of services that hook into Fire Eagle will increase as location awareness spreads through the developer community, but you can use your location on other sites indirectly too. Consider Flickr. Now world renowned for their incredible mapping and places features, geotagging on Flickr started out as a grassroots extension of regular tagging. That same technique can be used to start rolling geotagging into any publishing platform you come across, for any kind of content. Machine-tags (geo:lat= and geo:lon=) and the adr and geo microformats can be used to enhance anything you write with location information. A crash course in avian inflammability Fire Eagle is a location store. A broker between services and devices which provide location and those which consume it. It’s a switchboard that controls which pieces of your location different applications can see and use, and keeps hidden anything you want kept private. A blog widget that displays your current location in public can be restricted to display just your current city, whilst a service that provides you with a list of the nearest ATMs will operate better with a precise street address. Even if your iPhone tells Fire Eagle exactly where you are, consuming applications only see what you want them to see. It’s important for users to realise that they’re in control, but also important for application developers to remember that you cannot rely on having super-accurate information available all the time. You need to build location-aware applications which degrade gracefully, because users will provide fuzzier information — either through choice, or through less accurate sources. Application specific permissions are controlled through an OAuth API.
Each application has a unique key, used to request a second, user-specific key that permits access to that user’s information. You store that user key and it remains valid until such a time as the user revokes your application’s access. Unlike with passwords, these keys are unique per application, so revoking the access rights of one application doesn’t break all the others. Building your first Fire Eagle app; Geomarklet Fire Eagle’s developer documentation can take you through examples of writing simple applications using server side technologies (PHP, Python). Here, we’re going to write a client-side bookmarklet to make your location available in every site you use. It’s designed to fast-track the experience of having location available everywhere on the web, and show you how that can be really handy. Hopefully, this will set you thinking about how location can enhance the new applications you build in 2009. An oddity of bookmarklets Bookmarklets (or ‘favlets’, for those of an MSIE persuasion) are a strange environment to program in. Critically, you have no persistent storage available. As such, using token-auth APIs in a static environment requires you to build your application in a slightly strange way: authing yourself in advance and then hardcoding the keys into your script. Get started Before you do anything else, go to http://fireeagle.com and log in, get set up if you need to and by all means take a look around. Take a look at the mobile updaters section of the application gallery and perhaps pick out an app that will update Fire Eagle from your phone or laptop. Once that’s done, you need to register for an application key in the developer section. Head straight to /developer/create and complete the form. Since you’re building a standalone application, choose ‘Auth for desktop applications’ (rather than web applications), and select that you’ll be ‘accessing location’, not updating. At the end of this process, you’ll have two application keys, a ‘Consumer Key’ and a ‘Consumer Secret’, which look like these: Consumer Key luKrM9U1pMnu Consumer Secret ZZl9YXXoJX5KLiKyVrMZffNEaBnxnd6M These keys combined allow your application to make requests to Fire Eagle. Next up, you need to auth yourself; granting your new application permission to use your location. Because bookmarklets don’t have local storage, you can’t integrate the auth process into the bookmarklet itself — it would have no way of storing the returned key. Instead, I’ve put together a simple web frontend through which you can auth with your application. Head to Auth me, Amadeus!, enter the application keys you just generated and hit ‘Authorize with Fire Eagle’. You’ll be taken to the Fire Eagle website, just as in regular Fire Eagle applications, and after granting access to your app, be redirected back to Amadeus which will provide you with your user tokens. These tokens are used in subsequent requests to read your location. And, skip to the end… The process of building the bookmarklet, making requests to Fire Eagle, rendering it to the page and so forth follows, but if you’re the impatient type, you might like to try this out right now. Take your four API keys from above, and drag the following link to your Bookmarks Toolbar; it contains all the code described below. Before you can use it, you need to edit in your own API keys. Open your browser’s bookmark editor and where you find text like ‘YOUR_CONSUMER_KEY_HERE’, swap in the corresponding key you just generated.
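For reference, the finished bookmarklet is nothing more exotic than a javascript: URL - the whole script wrapped in an immediately invoked function on a single line, with your keys baked in. A minimal sketch of the shape it takes (the placeholder names are exactly the ones you’ll be swapping out):

javascript:(function(){
  var Keys = {
    consumer_key: 'YOUR_CONSUMER_KEY_HERE',
    consumer_secret: 'YOUR_CONSUMER_SECRET_HERE',
    user_token: 'YOUR_USER_TOKEN_HERE',
    user_secret: 'YOUR_USER_SECRET_HERE'
  };
  /* ...the Geomarklet code described below... */
})();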
Get Location Bookmarklet Basics To start on the bookmarklet code, set out a basic JavaScript module-pattern structure: var Geomarklet = function() { return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); Next we’ll add the keys obtained in the setup step, and also some basic Fire Eagle support objects: var Geomarklet = function() { var Keys = { consumer_key: 'IuKrJUHU1pMnu', consumer_secret: 'ZZl9YXXoJX5KLiKyVEERTfNEaBnxnd6M', user_token: 'xxxxxxxxxxxx', user_secret: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' }; var LocationDetail = { EXACT: 0, POSTAL: 1, NEIGHBORHOOD: 2, CITY: 3, REGION: 4, STATE: 5, COUNTRY: 6 }; var index_offset; return ({ callback: function(json) {}, run: function() {} }); }; Geomarklet.run(); The Location Hierarchy A successful Fire Eagle query returns an object called the ‘location hierarchy’. Depending on the level of detail shared, the index of a particular piece of information in the array will vary. The LocationDetail object maps the array indices of each level in the hierarchy to something comprehensible, whilst the index_offset variable is an adjustment based on the detail of the result returned. The location hierarchy object looks like this, providing a granular breakdown of a location, in human consumable and machine-friendly forms. "user": { "location_hierarchy": [{ "level": 0, "level_name": "exact", "name": "707 19th St, San Francisco, CA", "normal_name": "94123", "geometry": { "type": "Point", "coordinates": [-0.2347530752, 67.232323] }, "label": null, "best_guess": true, "id": , "located_at": "2008-12-18T00:49:58-08:00", "query": "q=707%2019th%20Street,%20Sf" }, { "level": 1, "level_name": "postal", "name": "San Francisco, CA 94114", "normal_name": "12345", "woeid": , "place_id": "", "geometry": { "type": "Polygon", "coordinates": [], "bbox": [] }, "label": null, "best_guess": false, "id": 59358791, "located_at": "2008-12-18T00:49:58-08:00" }, { "level": 2, "level_name": "neighborhood", "name": "The Mission, San Francisco, CA", "normal_name": "The Mission", "woeid": 23512048, "place_id": "Y12JWsKbApmnSQpbQg", "geometry": { "type": "Polygon", "coordinates": [], "bbox": [] }, "label": null, "best_guess": false, "id": 59358801, "located_at": "2008-12-18T00:49:58-08:00" }] } In this case the first object has a level of 0, so the index_offset is also 0. Prerequisites To query Fire Eagle we call in some existing libraries to handle the OAuth layer and the Fire Eagle API call. Your bookmarklet will need to add the following scripts into the page: The SHA1 hashing algorithm The OAuth wrapper An extension for the OAuth wrapper The Fire Eagle wrapper itself When the bookmarklet is first run, we’ll insert these scripts into the document. We’re also inserting a stylesheet to dress up the UI that will be generated. If you want to follow along any of the more mundane parts of the bookmarklet, you can download the full source code.
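A hedged sketch of what that script insertion might look like - insert_prerequisites is the name the final code uses, and the paths here are placeholders for wherever you host the libraries:

function insert_prerequisites() {
  // placeholder paths: point these at the SHA1, OAuth and Fire Eagle scripts
  var urls = ['sha1.js', 'oauth.js', 'oauth_extras.js', 'fireeagle.js'];
  for (var i = 0; i < urls.length; i++) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = urls[i];
    document.body.appendChild(script);
  }
}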
Rendering This bookmarklet can be extended to support any formatting of your location you like, but for sake of example I’m going to build three common formatters that you’ll find useful for common location scenarios: for sites which already ask for your location, and for publishing systems that accept tags or HTML mark-up. All the rendering functions are items in a renderers object, so they can be iterated through easily, making it trivial to add new formatting functions as you find new use cases (just add another function to the object). var renderers = { geotag: function(user) { if(LocationDetail.EXACT !== index_offset) { return false; } else { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; return "geo:lat=" + coords[0] + ", geo:lon=" + coords[1]; } }, city: function(user) { if(LocationDetail.CITY < index_offset) { return false; } else { return user.location_hierarchy[LocationDetail.CITY - index_offset].name; } } You should always fail gracefully, and in line with catering to users who choose not to share their location precisely, always check that the location has been returned at the level you require. Geotags are expected to be precise, so if an exact location is unavailable, returning false will tell the rendering aspect of the bookmarklet to ignore the function altogether. These first two are quite simple, geotag returns geo:lat=-0.2347530752, geo:lon=67.232323 and city returns San Francisco, CA. This final renderer creates a chunk of HTML using the adr and geo microformats, using all available aspects of the location hierarchy, and can be used to geotag any content you write on your blog or in comments: html: function(user) { var geostring = ''; var adrstring = ''; var adr = []; adr.push('<p class="adr">'); // city if(LocationDetail.CITY >= index_offset) { adr.push( '\n <span class="locality">' + user.location_hierarchy[LocationDetail.CITY-index_offset].normal_name + '</span>,' ); } // region if(LocationDetail.REGION >= index_offset) { adr.push( '\n <span class="region">' + user.location_hierarchy[LocationDetail.REGION-index_offset].normal_name + '</span>,' ); } // state if(LocationDetail.STATE >= index_offset) { adr.push( '\n <span class="region">' + user.location_hierarchy[LocationDetail.STATE-index_offset].normal_name + '</span>,' ); } // country if(LocationDetail.COUNTRY >= index_offset) { adr.push( '\n <span class="country-name">' + user.location_hierarchy[LocationDetail.COUNTRY-index_offset].normal_name + '</span>' ); } // postal if(LocationDetail.POSTAL >= index_offset) { adr.push( '\n <span class="postal-code">' + user.location_hierarchy[LocationDetail.POSTAL-index_offset].normal_name + '</span>,' ); } adr.push('\n</p>\n'); adrstring = adr.join(''); if(LocationDetail.EXACT === index_offset) { var coords = user.location_hierarchy[LocationDetail.EXACT].geometry.coordinates; geostring = '<p class="geo">' +'\n <span class="latitude">' + coords[0] + '</span>;' + '\n <span class="longitude">' + coords[1] + '</span>\n</p>\n'; } return (adrstring + geostring); } Here we check the availability of every level of location and build it into the adr and geo patterns as appropriate. Just as for the geotag function, if there’s no exact location the geo markup won’t be returned. Finally, there’s a rendering method which creates a container for all this data, renders all the applicable location formats and then displays them in the page for a user to copy and paste. You can throw this together with DOM methods and some simple styling, or roll in some components from YUI or jQuery to handle drawing full featured overlays. You can see this simple implementation for rendering in the full source code.
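Because the formatters all live in one object, the core of that rendering method can be sketched as a simple loop - a hypothetical outline only, with the DOM work elided:

// iterate every renderer, skipping any that declined to produce output
for (var name in renderers) {
  var output = renderers[name](json.user);
  if (output !== false) {
    // append `output` to the overlay container, ready to copy and paste
  }
}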
Make the call With a framework in place to render Fire Eagle’s location hierarchy, the only thing that remains is to actually request your location. Having already authed through Amadeus earlier, that’s as simple as instantiating the Fire Eagle JavaScript wrapper and making a single function call. It’s a big deal that whilst a lot of new technologies like OAuth add some complexity and require new knowledge to work with, APIs like Fire Eagle are really very simple indeed. return { run: function() { insert_prerequisites(); setTimeout( function() { var fe = new FireEagle( Keys.consumer_key, Keys.consumer_secret, Keys.user_token, Keys.user_secret ); var script = document.createElement('script'); script.type = 'text/javascript'; script.src = fe.getUserUrl( FireEagle.RESPONSE_FORMAT.json, 'Geomarklet.callback' ); document.body.appendChild(script); }, 2000 ); }, callback: function(json) { if(json.rsp && 'fail' == json.rsp.stat) { alert('Error ' + json.rsp.code + ": " + json.rsp.message); } else { index_offset = json.user.location_hierarchy[0].level; draw_selector(json); } } }; We first insert the prerequisite scripts required for the Fire Eagle request to function, and to prevent trying to instantiate the FireEagle object before it’s been loaded over the wire, the remaining instantiation and request is wrapped inside a setTimeout delay. We then create the request URL, referencing the Geomarklet.callback callback function and then append the script to the document body — allowing a cross-domain request. The callback itself is quite simple. Check for the presence and value of rsp.stat to test for errors, and display them as required. If the request is successful, set the index_offset — to adjust for the granularity of the location hierarchy — and then pass the object to the renderer. The result? When Geomarklet.run() is called, your location from Fire Eagle is read, and each renderer’s output is displayed on the page in an easily copy-and-pasteable form, ready to be used however you need. Deploy The final step is to convert this code into a long string for use as a bookmarklet. Easiest for Mac users is the JavaScript bundle in TextMate — choose Bundles: JavaScript: Copy as Bookmarklet to Clipboard. Then create a new ‘Get Location’ bookmark in your browser of choice and paste in. Those without TextMate can shrink their code down into a single line by first running their code through the JSLint tool (to ensure the code is free from errors and has all the required semi-colons) and then use a find-and-replace tool to remove line breaks from your code (or even run your code through JSMin to shrink it down). With the bookmarklet created and added to your bookmarks bar, you can now call up your location on any page at all. Get a feel for a web where your location is just another reliable part of the browsing experience. Where next? So, the Geomarklet you’ve been guided through is a pretty simple premise and pretty simple output. But from this base you can start to extend: Add code that will insert each of the location renderings directly into form fields, perhaps, or how about site-specific handlers to add your location tags into the correct form field in WordPress or Tumblr? Paste in your current location to Google Maps? Or Flickr? Geomarklet gives you a base to start experimenting with location on your own pages and the sites you browse daily. The introduction of consumer accessible geo to the web is an adventure of discovery; not so much discovering new locations, but discovering location itself. 2008 Ben Ward benward 2008-12-21T00:00:00+00:00 https://24ways.org/2008/geotag-everywhere-with-fire-eagle/ code
85 Starting Your Project on the Right Foot (and Keeping It There) I’m not sure if anything is as terrifying as beginning a new design project. I often spend hours trying to find the best initial footing in a design, so I’ve been working hard to improve my process, particularly for the earliest stages of a project. I want to smooth out the bumps that disrupt my creative momentum and focus on the emotional highs and lows I experience, and then try to minimize the lows and ride the highs as long as possible. Design is often a struggle broken up by blissful moments of creative clarity that provide valuable force to move your work forward. Momentum is a powerful tool in creative work, and it’s something we don’t always maximize when we’re working because of the hectic nature of our field. Obviously, every designer is going to have a different process, but I thought I’d share some of the methods I’ve begun to adopt. I hope this will spark a conversation among designers who are interested in looking at process in a new way. Jump-starting a project I cannot overstate the importance of immersing yourself in design and collecting ample amounts of inspiration when beginning a project. I make it a daily practice to visit a handful of sites (Dribbble, Graphic Exchange, Web Creme, siteInspire, Designspiration, and others) and save any examples of design that I like. I then sort them into general categories (publication design, illustration, typography, web design, and so on). Enjoying a bit of fresh design every day helps me absorb it and analyze why it’s effective instead of just imitating it. Many designers are afraid to look at too much design for fear that they’ll be tempted to copy it, but I feel a steady influx of design inspiration reduces that possibility. You’re much more likely to take the easy way out and rip off a design if you’re scrambling for inspiration after getting stuck. If you are immersed in design from a variety of mediums, you’ll engage your creative brain on multiple levels and have an easier time creating something unique for your project. Looking at good design will not make you a good designer but it will make you a better designer. Design is design Try not to limit your visual research to the medium you’re working in. Websites, books, posters and packaging all have their own unique limitations and challenges, and any one of those characteristics could be useful to you. Posters need to grab the viewer and pass on a small tidbit of information; packaging needs to encourage physical interaction; and websites need to encourage exploration. If you know the challenges you’ll be facing, you will know where to look for design that tackles those same problems. I find it refreshing to look at design from the turn of the nineteenth century, when type was laid out on objects without thought to aesthetics. Many vintage packages break all sorts of modern design rules, and looking at that kind of work is a great way to spark your creativity. Pulling yourself out of the box and away from the rules of what you’re working on can reveal solutions that are innovative and unique. After a little finessing, the warning label text from a 1940s hazardous chemical box could have the exact type and icon arrangement you need for your project. There’s a massive pool of design to pull from that doesn’t have the limitations the web has, and exploring those design worlds will help you grow your own repertoire.
If all else fails, start with the footer The very beginning of a project is the most frustrating point in a project for me. I’m trying to figure out typeface combinations, colors and the overall voice of the design, and until I find the right solutions, I’m a wreck. I’ve found often that my frustration stems from trying to solve too many problems at once. The beginning of a project has a lot of moving targets, nearly endless possible solutions, and constantly changing variables. You’ll knock out one problem only to discover your solution doesn’t jive with something you worked out earlier — you end up designing in circles. If you find yourself getting stuck at the beginning of a website design, try working out one specific element of the site and see what emerges. I’m going to recommend the footer. Why? Footers can easily be ignored in a design or become a dumping ground for items that couldn’t be worked into the main layout. But, at the start of most projects, the minimum content requirements for the footer are usually established. There needs to be a certain number of links, social media buttons, copyright details, a search bar, and so on. It’s a self-contained item within the design that has a specific purpose, and that’s a great element to focus on when you’re stuck in a design. Colors, typefaces, link styles, input fields and buttons can all be sketched out from just the footer. It’s a very flexible element that can be as prominent or subtle as you want, and it’s a solid starting point for setting the tone and style of a site. Save the details Designers love details. I love details. But don’t let nitpicking early on in your process kill your creative momentum. Design is an emotional process, and being frustrated or defeated by a tricky problem or a graphical detail you just can’t nail down can deflate your creative energies. If you hit a roadblock, set it aside and tackle another piece of the project. As you spend time engaged in a design, the style you develop will evolve according to the needs of the content, and you might arrive naturally at a solution that will work perfectly for the problem that had you stuck before. If I find myself working on one particular element for more than half an hour without any clear movement, I shelve it. Designers often wear their obsessive detail-oriented tendencies as a badge of honor, but there’s a difference between making the design better and wasting time. If you’ve spent hours nudging elements around pixel by pixel and can’t settle on something, it probably means what you’re doing isn’t making a huge improvement on the design. Don’t be afraid to let it lie and come at it again with fresh eyes. You will be better equipped to tackle the finer points of a project once you’ve got the broad strokes defined. Have a plan when you start and stop designing We all know that creativity isn’t something you can turn on effortlessly, and it’s easy to forget the emotional process that goes along with design. If you leave a project in a place of frustration, it’s going to stay with you in your free time and affect you negatively, like a dark cloud of impending disaster. Try to end each design session with a victory, a small bit of definable progress that you can take with you in your downtime. Even something as small as finding the right opacity for the interior shadow on the search bar in the header of the site is a win.
Likewise, when you return to a project after a break, it can be difficult to get the ball rolling on the design again if you set it down without a clear path for the next steps. I find that I work on details best when I’m returning from downtime, when I’m fresh and re-energized and ready to dig in again. Try to pick out at least one element you’d like to fine-tune when you are winding down in a design session and use it to kick-start your next session. Content is king I would argue there is nothing more crucial to the success of a design than having the content defined from the outset. Designing without content is similar to designing without an audience, and designing with vague ideas of content types and character limits is going to result in a muted design that doesn’t reach its full potential. Images and language go hand in hand with design, and can take a design from functional to outstanding if you have them available from the outset. We don’t always have the luxury of having content to build a design around, but fight for it whenever you can. For example, if the site you are designing is full of technical jargon, your paragraphs might need a longer line length to accommodate the longer words being used.  Often, working with content will lead to design solutions you wouldn’t have come to otherwise. Design speaks to content, and content speaks to design. Lorem ipsum doesn’t speak to anyone (unless you know Latin, in which case, congratulations!). Every project has its own set of needs, and every designer has his or her own method of working. There’s obviously no perfect process to design, and being dogmatic about process can be just as harmful as not having one. Exposing yourself to new design and new ways of designing is an easy way to test your skills and grow. When things are hard and you can’t get any momentum going on a design, this is when your skill set is truly challenged. We all hope to get wonderful projects with great assets and ample creative possibilities, but you won’t always be so blessed, and this is when the quality of your process is really going to shine. 2012 Bethany Heck bethanyheck 2012-12-02T00:00:00+00:00 https://24ways.org/2012/starting-your-project-on-the-right-foot/ process
3 Project Hubs: A Home Base for Design Projects SCENE: A design review meeting. Laptop screens. Coffee cups. Project manager: Hey, did you get my email with the assets we’ll be discussing? Client: I got an email from you, but it looks like there’s no attachment. PM: Whoops! OK. I’m resending the files with the attachments. Check again? Client: OK, I see them. It’s homepage_v3_brian-edits_FINAL_for-review.pdf, right? PM: Yeah, that’s the one. Client: OK, hang on, Bill’s going to print them out. (3-minute pause. Small talk ensues.) Client: Alright, Bill’s back. We’re good to start. Brian: Oh, actually those homepage edits we talked about last time are in the homepage_v4_brian_FINAL_v2.pdf document that I posted to Basecamp earlier today. Client: Oh, OK. What message thread was that in? Brian: Uh, I’m pretty sure it’s in “Homepage Edits and Holiday Schedule.” Client: Alright, I see them. Bill’s going back to the printer. Hang on a sec… This is only a slightly exaggerated version of my experience in design review meetings. The design project dance is a sloppy one. It involves a slew of email attachments, PDFs, PSDs, revisions, GitHub repos, staging environments, and more. And while tools like Basecamp can help manage all these moving parts, it can still be incredibly challenging to extract only the important bits, juggle deliverables, and see how your project is progressing. Enter project hubs. Project hubs A project hub consolidates all the key design and development materials onto a single webpage presented in reverse chronological order. The timeline lives online (either publicly available or password protected), so that everyone on the team has easy access to it. A project hub. I was introduced to project hubs after seeing Dan Mall’s open redesign of Reading Is Fundamental. Thankfully, I had a chance to work with Dan on two projects where I got to see firsthand how beneficial a project hub can be. Here’s what makes a project hub great: Serves as a centralized home base for the project Trains clients and teams to decide in the browser Shows the project’s progress easily and visually Provides an archive for project artifacts A home base Your clients and colleagues can expect to get the latest and greatest updates to your project when visiting the project hub, the same way you’d expect to get the latest information on a requested topic when you visit a Wikipedia page. That’s the beauty of URIs that don’t change. Creating a project hub reduces a ton of email volley nonsense, and eliminates the need to produce files and directories with staggeringly ridiculous names like design/12.13.13/team/brian/for_review/_FINAL/styletile_121313_brian-edits-final_v2_FINAL.pdf. The team can simply visit the project hub’s URL and click the link to whatever artifact they need. Need to make an update? Simply update the link on the project hub. No more email tango and silly file names. Deciding in the browser Let’s change the phrase “designing in the browser” to “deciding in the browser.” Dan Mall We make websites, but all too often we find ourselves looking at web design artifacts in abstractions. We email PDFs to each other, glance at mockup JPGs on our desktops, and of course kill trees in order to print out designs so that we can scribble in the margins. All of these practices subtly take everyone further and further away from the design’s eventual final resting place: the browser.
Because a project hub is just a simple webpage, reviewing designs is as easy as clicking some links, which keep your clients and teams in the browser. You can keep people in the browser with yet another clever trick from the wily Dan Mall: instead of sending clients PDFs or JPGs, he created a simple webpage and tossed his static visuals into the template (you can view an example here). This forces clients to review web design work in the browser rather than launching a PDF viewer or Preview. Now this all might sound trivial to you (“Of course my client knows that we’re designing a website!”), but keeping the design artifacts in the browser subconsciously helps remind everyone of the medium for which you’re designing, which helps everyone focus on the right aspects of the design and have the right conversations. Progress over time When you’re in the trenches, it’s often hard to visualize how a project is progressing. Tools like Basecamp include discussions, files, to-dos, and more, which are all great tools but also make things a bit noisy. Project hubs provide you and your clients a quick and easy way to see at a glance how things are coming along. Teams can rest assured they’re viewing the most current versions of designs, and managers can share progress with stakeholders simply by providing a link to the project hub. Over time, a project hub becomes an easily accessible archive of all the design decisions, which makes it easy to compare and contrast different versions of designs and prototypes. Setting up a project hub Setting up your own project hub is pretty simple. Simply create a webpage with some basic styles and branding. I’ve created a project hub template that’s available on GitHub if you want a jump-start. Publish the webpage to a URL somewhere that makes sense (we’ve found that a subdomain of your site works quite well) and share it with everyone involved in the project. Bookmark it. Let everyone know that this is where design updates will be shared, and that they can always come back to the project hub to track the project’s progress. When it comes time to share new updates, simply add a new node to the timeline and republish the webpage. Simple FTPing works just fine, but it might make sense to keep track of changes using version control. Our project hub for our open redesign of the Pittsburgh Food Bank is managed on GitHub, which means that I can make edits to the hub right from GitHub. Thanks to the magical wizardry of webhooks, I can automatically deploy the project hub so that everything stays in sync. That’s the fancy-pants way to do it, and is certainly not a requirement. As long as you’re able to easily make edits and keep your project hub up to date, you’re good to go. So that’s the hubbub Project hubs can help tame the chaos of the design process by providing a home base for all key design and development materials. Keep the design artifacts in the browser and give clients and colleagues quick insight into your project’s progress. Happy hubbing! 2013 Brad Frost bradfrost 2013-12-17T00:00:00+00:00 https://24ways.org/2013/project-hubs/ process
83 Cut Copy Paste Long before I got into this design thing, I was heavily into making my own music inspired by the likes of Coldcut and Steinski. I would scour local second-hand record shops in search of obscure beats, loops and bits of dialogue in the hope of finding that killer sample I could then splice together with other things to make a huge hit that everyone would love. While it did eventually lead to a record contract and getting to release a few 12″ singles, ultimately I knew I’d have to look for something else to pay the bills. I may not make my own records any more, but the approach I took back then – finding (even stealing) things, cutting and pasting them into interesting combinations – is still at the centre of how I work, only these days it’s pretty much bits of code rather than bits of vinyl. Over the years I’ve stored these little bits of code (some I’ve found, some I’ve created myself) in Evernote, ready to be dialled up whenever I need them. So when Drew got in touch and asked if I’d like to do something for this year’s 24 ways I thought it might be kind of cool to share with you a few of these snippets that I find really useful. Think of these as a kind of coding mix tape; but remember – don’t just copy as is: play around, combine and remix them into other wonderful things. Some of this stuff is dirty; some of it will make hardcore programmers feel ill. For those people, remember this – while you were complaining about the syntax, I made something. Create unique colours Let’s start right away with something I stole. Well, actually it was given away at the time by Matt Biddulph who was then at Dopplr before Nokia destroyed it. Imagine you have thousands of words and you want to assign each one a unique colour. Well, Matt came up with a crazily simple but effective way to do that using an MD5 hash. Just encode said word using an MD5 hash, then take the first six characters of the string you get back to create a hexadecimal colour representation. I can’t guarantee that it will be a harmonious colour palette, but it’s still really useful. The thing I love the most about this technique is the left-field thinking of using a hashing algorithm to create colours! Here’s an example using JavaScript: // requires the MD5 library available at http://pajhome.org.uk/crypt/md5 function MD5Hex(str){ var result = MD5.hex(str).substring(0, 6); return result; } Make something breathe using a sine wave I never paid attention in school, especially during double maths. As a matter of fact, the only time I received corporal punishment – several strokes of the ruler – was in maths class. Anyway, if they had shown me then how beautiful mathematics actually is, I might have paid more attention. Here’s a little example of how a sine wave can be used to make something appear to breathe. I recently used this on an Arduino project where an LED ring surrounding a button would gently breathe. Because of that it felt much more inviting. I love mathematics. for(int i = 0; i<360; i++){ float rad = DEG_TO_RAD * i; int sinOut = constrain((sin(rad) * 128) + 128, 0, 255); analogWrite(LED, sinOut); delay(10); }
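The same sine trick ports straight to the browser - a hypothetical sketch that makes an element “breathe” by oscillating its opacity (the .breathe selector and the 10ms interval are my own choices):

const el = document.querySelector('.breathe')
let i = 0
setInterval(() => {
  const rad = (Math.PI / 180) * i
  el.style.opacity = 0.5 + Math.sin(rad) * 0.5  // oscillates between 0 and 1
  i = (i + 1) % 360
}, 10)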
Snap position to grid This is so elegant I love it, and it was shown to me by Gary Burgess, or Boom Boom as myself and others like to call him. It snaps a position, in this case the X-position, to a grid. Just define your grid size (say, twenty pixels) and you’re good. snappedXpos = floor( xPos / gridSize) * gridSize; Calculate the distance between two objects For me, interaction design is about the relationship between two objects: you and another object; you and another person; or simply one object to another. How close these two things are to each other can be a handy thing to know, allowing you to react to that information within your design. Here’s how to calculate the distance between two objects in a 2-D plane: deltaX = round(p2.x-p1.x); deltaY = round(p2.y-p1.y); diff = round(sqrt((deltaX*deltaX)+(deltaY*deltaY))); Find the X- and Y-position between two objects What if you have two objects and you want to place something in-between them? A little bit of interruption and disruption can be a good thing. This small piece of code will allow you to place an object in-between two other objects: // set the position: 0.5 = half-way float position = 0.5; float x = x1 + (x2 - x1) * position; float y = y1 + (y2 - y1) * position; Distribute objects equally around a circle More fun with maths, this time adding cosine to our friend sine. Let’s say you want to create a circular navigation of arbitrary elements (yeah, Jakob, you heard), or you want to place images around a circle. Well, this piece of code will do just that. You can adjust the size of the circle by changing the distance variable and alter the number of objects with the numberOfObjects variable. Example below is for use in Processing. // Example for Processing available for free download at processing.org void setup() { size(800,800); int numberOfObjects = 12; int distance = 100; float inc = (TWO_PI)/numberOfObjects; float x,y; float a = 0; for (int i=0; i < numberOfObjects; i++) { x = (width/2) + sin(a)*distance; y = (height/2) + cos(a)*distance; ellipse(x,y,10,10); a += inc; } } Use modulus to make a grid The modulus operator, represented by %, returns the remainder of a division. Fallen into a coma yet? Hold on a minute – this seemingly simple operator is very powerful in lots of ways. At a simple level, you can use it to determine if a number is odd or even, great for creating alternate row colours in a table for instance: boolean checkForEven(int numberToCheck) { if (numberToCheck % 2 == 0) { return true; } else { return false; } } That’s all well and good, but here’s a use of modulus that might very well blow your mind. Construct a grid with only a few lines of code. Again the example is in Processing but can easily be ported to any other language. void setup() { size(600,600); int numItems = 120; int numOfColumns = 12; int xSpacing = 40; int ySpacing = 40; int totalWidth = xSpacing*numOfColumns; for (int i=0; i < numItems; i++) { ellipse(floor((i*xSpacing)%totalWidth),floor((i*xSpacing)/totalWidth)*ySpacing,10,10); } }
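To show just how portable it is, here’s the same modulus grid as a JavaScript/canvas sketch (a canvas element is assumed to exist on the page):

const ctx = canvas.getContext('2d')
const numItems = 120
const numOfColumns = 12
const spacing = 40
const totalWidth = spacing * numOfColumns
for (let i = 0; i < numItems; i++) {
  // same trick: x wraps at the end of each row, y steps down a row
  const x = (i * spacing) % totalWidth
  const y = Math.floor((i * spacing) / totalWidth) * spacing
  ctx.fillRect(x, y, 10, 10)
}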
Not all the bits of code I keep around are for actual graphical output. I also have things that are very utilitarian, but which I still consider part of the design process. Here are a couple of things that I’ve found really handy lately in my design workflow. They may be a little specific, but I hope they demonstrate that it’s not about working harder, it’s about working smarter. Merge CSV files into one file Recently, I’ve had to work with huge – about 1GB – CSV text files that I then needed to combine into one master CSV file so I could then process the data. Opening up each text file and then copying and pasting just seemed really dumb, not to mention slow, so I thought there must be a better way. After some Googling I found this command line script that would combine .txt files into one file and add a new line after each: awk 1 *.txt > finalfile.txt But that wasn’t what I was ideally after. I wanted to merge the CSV files, keeping the first row of the first file (the column headings) and then ignore the first row of subsequent files. Sure enough I found the answer after some Googling and it worked like a charm. Apologies to the original author but I can’t remember where I found it, but you, sir or madam, are awesome. Save this as a shell script: FIRST= for FILE in *.csv do exec 5<"$FILE" # Open file read LINE <&5 # Read first line [ -z "$FIRST" ] && echo "$LINE" # Print it only from first file FIRST="no" cat <&5 # Print the rest directly to standard output exec 5<&- # Close file # Redirect stdout for this section into file.out done > file.out Create a symbolic link to another file or folder Oftentimes, I’ll find myself hunting through a load of directories to load a file to be processed, like a CSV file. Use a symbolic link (in the Terminal) to place a link on your desktop or wherever is most convenient and it’ll save you loads of time. Especially great if you’re going through a Java file dialog box in Processing or something that doesn’t allow the normal Mac dialog box or aliases. cd /DirectoryYouWantShortcutToLiveIn ln -s /Directory/You/Want/ShortcutTo/ TheShortcut You can do it, in the mix I hope you’ve found some of the above useful and that they’ve inspired a few ideas here and there. Feel free to tell me better ways of doing things or offer up any other handy pieces of code. Most of all though, collect, remix and combine the things you discover to make lovely new things. 2012 Brendan Dawes brendandawes 2012-12-17T00:00:00+00:00 https://24ways.org/2012/cut-copy-paste/ code
156 Mobile 2.0 Thinking 2.0 As web geeks, we have a thick skin towards jargon. We all know that “Web 2.0” has been done to death. At Blue Flavor we even have a jargon bucket to penalize those who utter such painfully overused jargon with a cash deposit. But Web 2.0 is a term that has lodged itself into the consciousness of the masses. This is actually a good thing. The 2.0 suffix was able to succinctly summarize all that was wrong with the Web during the dot-com era as well as the next evolution of an evolving medium. While the core technologies actually stayed basically the same, the principles, concepts, interactions and contexts were radically different. With that in mind, this Christmas I want to introduce to you the concept of Mobile 2.0. While not exactly a new concept in the mobile community, it is relatively unknown in the web community. And since the foundation of Mobile 2.0 is the web, I figured it was about time for you to get to know each other. It’s the Carriers’ world. We just live in it. Before getting into Mobile 2.0, I thought first I should introduce you to its older brother. You know the kind, the kid with emotional problems that likes to beat up on you and your friends for absolutely no reason. That is the mobile of today. The mobile ecosystem is a very complicated space, often and incorrectly compared to the Web. If the Web was a freewheeling hippie — believing in freedom of information and the unity of man through communities — then Mobile is the cutthroat capitalist — out to pillage and plunder for the sake of the almighty dollar. Where the Web is relatively easy to publish to and ultimately make a buck, Mobile is fraught with layers of complexity, politics and obstacles. I can think of no better way to summarize these challenges than the testimony of Jason Devitt to the United States Congress in what is now being referred to as the “iPhone Hearing.” Jason is the co-founder and CEO of SkyDeck, a new wireless startup, and former CEO of Vindigo, an early pioneer in mobile content. As Jason points out, the mobile ecosystem is a closed door environment controlled by the carriers, forcing the independent publisher to compete or succumb to the will of corporate behemoths. But that is all about to change. Introducing Mobile 2.0 Mobile 2.0 is a term used by the mobile community to describe the current revolution happening in mobile. It describes the convergence of mobile and web services, adding portability, ubiquitous connectivity and location-aware services that add physical context to information found on the Web. It’s an important term that looks toward the future, allowing us to imagine the possibilities that mobile technology has long promised but has yet to deliver. It imagines a world where developers can publish mobile content without the current constraints of the mobile ecosystem. Like the transition from Web 1.0 to 2.0, it signifies the shift away from corporate or brand-centered experiences to user-centered experiences. A focus on richer interactions, driven by user goals. Moving away from proprietary technologies to more open and standard ones, more akin to the Web. And most importantly (from our perspective as web geeks) a shift away from kludgy one-off mobile applications toward using the Web as a platform for content and services. This means the world of the Web and the world of Mobile are coming together faster than you can say ARPU (Average Revenue Per User, a staple mobile term to you webbies). And this couldn’t come at a better time.
The importance of understanding and addressing user context is quickly becoming a crucial consideration for every interactive experience as the number of ways we access information on the Web increases. Mobile enables the power of the Web, the collective information of millions of people, inherent payment channels and access to just about every other mass medium to literally be overlaid on top of the physical world, in context to the person viewing it. Anyone who can’t imagine how the influence of mobile technology could transform how we perform even the simplest of daily tasks needs to get away from their desktop and see the new evolution of information. The Instigators But what will make Mobile 2.0 move from idyllic concept to a hardened market reality in 2008 will be four key technologies. It’s my guess that you know each of them already. 1. Opera Opera is like the little engine that could. They have been a driving force in moving the Web as we know it on to mobile handsets. Opera technology has proven itself to be highly adaptable, finding itself preloaded on over 40 million handsets, available on television sets through Nintendo Wii or via the Nintendo DS. 2. WebKit Many were surprised when Apple chose to use KHTML instead of Gecko (the guts of Firefox) to power their Safari rendering engine. But WebKit has quickly evolved to be a powerful and flexible browser engine in the mobile context. WebKit has been in Nokia smartphones for a few years now, is the technology behind Mobile Safari in the iPhone and the iPod Touch and is the default web technology in Google’s open mobile platform effort, Android. 3. The iPhone The iPhone has finally brought the concepts and principles of Mobile 2.0 into the forefront of consumers’ minds and therefore developers’ minds as well. Over 500 web applications have been written specifically for the iPhone since its launch. It’s completely unheard of to see so many applications built for the mobile context in such a short period of time. 4. CSS & Javascript Web 2.0 could not exist without the rich interactions offered by CSS and Javascript, and Mobile 2.0 is no different. CSS and Javascript support across multiple phones historically has been, well… to put it positively… utter crap. Javascript finally allows developers to create interesting interactions that support user goals and the mobile context. Specifically, AJAX allows us to finally shed the days of bloated Java applications and focus on portable and flexible web applications, while CSS — namely CSS3 — allows us to create designs that are as beautiful as they are economical with bandwidth and load times. With Leaflets, a collection of iPhone optimized web apps we created, we heavily relied on CSS3 to cache and reuse design elements over and over, minimizing download times while providing an elegant and user-centered design. In Conclusion It is the combination of all these instigators that is significantly lowering the bar to mobile publishing. The market, as Jason Devitt describes it, will begin to fade into the background. And maybe the world of mobile will finally start looking more like the Web that we all know and love. So after the merriment and celebration of the holiday is over and you look toward the new year to refresh and renew, I hope that you seriously consider the mobile medium. By this time next year, it is predicted that one-third of humanity will be using mobile devices to access the Web. 2007 Brian Fling brianfling 2007-12-21T00:00:00+00:00 https://24ways.org/2007/mobile-2-0/ business
150 A Gift Idea For Your Users: Respect, Yo If, indeed, it is the thought that counts, maybe we should pledge to make more thoughtful design decisions. In addition to wowing people who use the Web sites we build with novel features, nuanced aesthetics and the new new thing, maybe we should also thread some subtle things throughout our work that let folks know: hey, I’m feeling ya. We’re simpatico. I hear you loud and clear. It’s not just holiday spirit that moves me to talk this way. As good as people are, we need more than the horizon of karma to overcome that invisible demon, inertia. Makers of the Web, respectful design practices aren’t just the right thing, they are good for business. Even if your site is the one and only place to get experience x, y or zed, you don’t rub someone’s face in it. You keep it free flowing, you honor the possible back and forth of a healthy transaction, you are Johnny Appleseed with the humane design cues. You make it clear that you are in it for the long haul. A peek back: Think back to what search (and strategy) was like before Google launched a super clean page with “I’m Feeling Lucky” button. Aggregation was the order of the day (just go back and review all the ‘strategic alliances’ that were announced daily.) Yet the GOOG comes along with this zen layout (nope, we’re not going to try to make you look at one of our media properties) and a bold, brash, teleport-me-straight-to-the-first-search-result button. It could have been titled “We’re Feeling Cocky”. These were radical design decisions that reset how people thought about search services. Oh, you mean I can just find what I want and get on with it? It’s maybe even more impressive today, after the GOOG has figured out how to monetize attention better than anyone. “I’m Feeling Lucky” is still there. No doubt, it costs the company millions. But by leaving a little money on the table, they keep the basic bargain they started to strike in 1997. We’re going to get you where you want to go as quickly as possible. Where are the places we might make the same kind of impact in our work? Here are a few ideas: Respect People’s Time As more services become more integrated with our lives, this will only become more important. How can you make it clear that you respect the time a user has granted you? User-Oriented Defaults Default design may be the psionic tool in your belt. Unseen, yet pow-er-ful. Look at your defaults. Who are they set up to benefit? Are you depending on users not checking off boxes so you can feel ok about sending them email they really don’t want? Are you depending on users not checking off boxes so you tilt privacy values in ways most beneficial for your SERPs? Are you making it a little too easy for 3rd party applications to run buckwild through your system? There’s being right and then there’s being awesome. Design to the spirit of the agreement and not the letter. See this? Make sure that’s really the experience you think people want. Whenever I see a service that defaults to not opting me in to their newsletter just because I bought a t-shirt from them, you can be sure that I trust them that much more. And they are likely to see me again. Reduce, Reuse It’s likely that people using your service will have data and profile credentials elsewhere. You should really think hard about how you can let them repurpose some of that work within your system. Can you let them reduce the number of logins/passwords they have to manage by supporting OpenID?
Can you let them reuse profile information from another service by slurping in (or even subscribing) to hCards? Can you give them a leg up by reusing a friends list they make available to you? (Note: please avoid the anti-pattern of inviting your user to upload all her credential data from 3rd party sites into your system.) This is a much larger issue, and if you’d like to get involved, have a look at the wiki, and dive in. Make it simple to leave Oh, this drives me bonkers. Again, the more simple you make it to increase or decrease involvement in your site, or to just opt-out altogether, the better. This example from Basecamp is instructive: At a glance, I can see what the implications are of choosing a different type of account. I can also move between account levels with one click. Finally, I can cancel the service easily. No hoop jumping. Also, it should be simple for users to take data with them or delete it. Let Them Have Fun Don’t overlook opportunities for pleasure. Even the most mundane tasks can be made more enjoyable. Check out one of my favorite pieces of interaction design from this past year: Holy knob fiddling, Batman! What a great way to get people to play with preference settings: an equalizer metaphor. Those of a certain age will recall how fun it was to make patterns with your uncle’s stereo EQ. I think this is a delightful way to encourage people to optimize their own experience of the news feed feature. Given the killer nature of this feature, it was important for Facebook to make it easy to fine tune. I’d also point you to Flickr’s Talk Like A Pirate Day Easter egg as another example of design that delights. What a huge amount of work for a one-off, totally optional way to experience the site. And what fun. And how true to its brand persona. Brill. Anti-hype Don’t talk so much. Rather, ship and sample. Release code, tell the right users. See what happens. Make changes. Extend the circle a bit by showing some other folks. Repeat. The more you hype coming features, the more you talk about what isn’t yet, the more you build unrealistic expectations. Your genius can’t possibly match our collective dreaming. Disappointment is inevitable. Yet, if you craft the right melody and make it simple for people to hum your tune, it will spread. Give it time. Listen. Speak the Language of the Tribe It’s respectful to speak in a human way. Not that you have to get all zOMGWTFBBQ!!1 in your messaging. People respond when you speak to them in a way that sounds natural. Natural will, of course, vary according to context. Again, listen in and people will signal the speech that works in that group for those tasks. Reveal those cues in your interface work and you’ll have powerful proof that actual people are working on your Web site. This example of Pownce‘s gender selector is the kind of thing I’m talking about: Now, this doesn’t mean you should mimic the lingo on some cool kidz site. Your service doesn’t need to have a massage when it’s down. Think about what works for you and your tribe. Excellent advice here from Feedburner’s Dick Costolo on finding a voice for your service. Also, Mule Design’s Erika Hall has an excellent talk on improving your word fu. Pass the mic, yo Here is a crazy idea: you could ask your users what they want. Maybe you could even use this input to figure out what they really want. Tools abound. Comments, wikis, forums, surveys. Embed the sexy new Get Satisfaction widget and have a dynamic FAQ running. 
The point is that you make it clear to people that they have a means of shaping the service with you. And you must showcase in some way that you are listening, evaluating and acting on some of that user input. You get my drift. There are any number of ways we can show respect to those who gift us with their time, data, feedback, attention, evangelism, money. Big things are in the offing. I can feel the love already. 2007 Brian Oberkirch brianoberkirch 2007-12-23T00:00:00+00:00 https://24ways.org/2007/a-gift-idea-for-your-users-respect-yo/ ux
23 Animating Vectors with SVG It is almost 2014, and fifteen years ago the W3C started developing a web-based scalable vector graphics (SVG) format. As web technologies go, this one is pretty old and well entrenched. See the Pen yJflC by Drew McLellan (@drewm) on CodePen. Unlike rasterized images, SVG files will stay crisp and sharp at any resolution. With high-DPI phones, tablets and monitors, all those rasterized icons are starting to look a bit old and blocky. There are several options to get simpler, decorative pieces to render smoothly and respond to various device widths, shapes and sizes. Symbol fonts are one option; the other is SVG.

I’m a big fan of SVG. SVG is an XML format, which means it is possible to write by hand or to script. The most common way to create an SVG file is through the use of various drawing applications like Illustrator, Inkscape or Sketch. All of them open and save the SVG format. But, if SVG is so great, why doesn’t it get more attention? The simple answer is that for a long time it wasn’t well supported, so no one touched the technology. SVG’s adoption has always been hampered by browser support, but that’s not the case any more. Every modern browser (at least three versions back) supports SVG. Even IE9. Although the browsers support SVG, it is implemented in many different ways.

SVG in HTML Some browsers allow you to embed SVG right in the HTML: the <svg> element. Treating SVG as a first-class citizen works — sometimes. Another way to embed SVG is via the <img> element; using the src attribute, you can refer to an SVG file. Again, this only works sometimes and leaves you in a tight space if you need to have a fallback for older browsers. The most common solution is to use the <object> element, with the data attribute referencing the SVG file. When a browser does not support this, it falls back to the content inside the <object>. This could be a rasterized fallback <img>. This method gets you the best of both worlds: a nice vector image with an alternative rasterized image for browsers that don’t support SVG (a short sketch of this pattern appears below). The downside is that you need to manage both formats, and some browsers will download both the SVG and the rasterized version, which becomes a performance problem. Alexey Ten came up with a brilliant little trick that uses inline SVG combined with an SVG <image> element. This has an xlink:href attribute pointing to the vector SVG representation and a src attribute pointing to the rasterized version. Older browsers will rewrite the <image> element as <img> and use the rasterized src attribute, but modern browsers will show the vector SVG.

<svg width="96" height="96">
    <image xlink:href="svg.svg" src="svg.png" width="96" height="96"/>
</svg>

It is a great workaround for most situations. You will have to determine the browsers you want or need to support and consider performance issues to decide which method is best for you.

So it can be used in HTML. Why? There are two compelling reasons why vector graphics in the form of icons and symbols are going to be important on the web. With higher resolution screens, going from 72dpi to 200, 300, even over 400dpi, your rasterized icons are looking a little too blocky. As we zoom and print, we expect the visuals on the site to also stay smooth and crisp. The other main reason vector graphics are useful is scaling. As responsive websites become the norm, we need a way to dynamically readjust the heights, widths and styles of various elements. 
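Before we get to that, here is the sketch of the <object> embedding pattern promised above. It is a minimal example, not taken from the article, and the file names are placeholders:

<object data="logo.svg" type="image/svg+xml" width="96" height="96">
    <!-- Browsers without SVG support ignore the object data and render this fallback instead -->
    <img src="logo.png" width="96" height="96" alt="Logo">
</object>

Because the fallback image only renders when the <object> can’t, SVG-capable browsers get the crisp vector version while older browsers still get a usable rasterized one.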
SVG handles this perfectly, since vectors remain smooth when changing size. SVG files are text-based, so they’re small and can be gzipped nicely. There are also techniques for creating SVG sprites to further squeeze out performance gains. But SVG really shines when you begin to couple it with JavaScript. Since SVG elements are part of the DOM, they can be interacted with just like any other element you are used to. The folks at Vox Media had an ingenious little trick with SVG for their PlayStation and Xbox One reviews. I’ve used the same technique for the 24 ways example. Vox Media spent a lot of time creating SVG line art of the two consoles, but once in place the artwork scaled and resized beautifully. They still had another trick up their sleeves. In their example, they knew each console was line art, so they used SVG’s line dash property to simulate the lines being drawn by animating the growth of the line by small percentage increments until the lines were complete. This is a great example of a situation where the alternatives wouldn’t be as straightforward to implement. Using an animated GIF would create a heavy file since it would need to contain all the frames of the animation at a large size to permit scaling; even then, smooth aliasing would be lost. canvas and plenty of JavaScript would be another alternative, but canvas is a rasterized format. It would need to be redrawn at each scale, which is certainly possible, but smoothness would be lost when zooming or printing. The HTML, SVG and JavaScript for this example add up to less than 4KB! Let’s have a quick look at the code:

<script>
var current_frame = 0;
var total_frames = 60;
var path = new Array();
var length = new Array();
for(var i=0; i<4; i++){
    path[i] = document.getElementById('i'+i);
    l = path[i].getTotalLength();
    length[i] = l;
    path[i].style.strokeDasharray = l + ' ' + l;
    path[i].style.strokeDashoffset = l;
}
var handle = 0;
var draw = function() {
    var progress = current_frame/total_frames;
    if (progress > 1) {
        window.cancelAnimationFrame(handle);
    } else {
        current_frame++;
        for(var j=0; j<path.length; j++){
            path[j].style.strokeDashoffset = Math.floor(length[j] * (1 - progress));
        }
        handle = window.requestAnimationFrame(draw);
    }
};
draw();
</script>

First, we need to initialize a few variables to set the current frame, the number of frames, how fast the animation will run, and we get each of the paths based on their IDs. With those paths, we set the dash and dash offset.

path[i].style.strokeDasharray = l + ' ' + l;
path[i].style.strokeDashoffset = l;

We start the line as a dash, which effectively makes it blank or invisible. Next, we move to the draw() function. This is where the magic happens. We want to increment the frame to move us forward in the animation and check it’s not finished. If it continues, we then take a percentage of the distance based on the frame and then set the dash offset to this new percentage. This gives the illusion that the line is being drawn. Then we have an animation callback, which starts the draw process over again. That’s it! It will work with any SVG <path> element that you can draw.

Libraries to get you started If you aren’t sure where to start with SVG, there are several libraries out there to help. They also abstract all browser compatibility issues to make your life easier. Raphaël, Snap.svg and svg.js are all worth a look. You can also get most vector applications to export SVG. 
This means that you can continue your normal workflows, but instead of flattening the image as a PNG or bringing it over to Photoshop to rasterize, you can keep all your hard work as vectors and reap the benefits of SVG. 2013 Brian Suda briansuda 2013-12-07T00:00:00+00:00 https://24ways.org/2013/animating-vectors-with-svg/  
89 Direction, Distance and Destinations With all these new smartphones in the hands of lost and confused owners, we need a better way to represent distances and directions to destinations. The immediate examples that jump to mind are augmented reality apps which let you see another world through your phone’s camera. While this is interesting, there is a simpler way: letting people know how far away they are and if they are getting warmer or colder. In the app world, you can easily tap into the phone’s array of sensors such as the GPS and compass, but what people rarely know is that you can do the same with HTML. The native versus web app debate will never subside, but at least we can show you how to replicate some of the functionality progressively in HTML and JavaScript. In this tutorial, we’ll walk through how to create a simple webpage listing distances and directions of a few popular locations around the world. We’ll use JavaScript to access the device’s geolocation API and also attempt to access the compass to get a heading. Both of these APIs are documented in the W3C geolocation API specification and can be used on both desktop and mobile devices today. To get started, we need a list of a few locations around the world. I have chosen the highest mountain peak on each continent so you can see a diverse set of distances and directions.

Mountain         °Latitude    °Longitude
Kilimanjaro      -3.075833    37.353333
Vinson Massif    -78.525483   -85.617147
Puncak Jaya      -4.078889    137.158333
Everest          27.988056    86.925278
Elbrus           43.355       42.439167
Mount McKinley   63.0695      -151.0074
Aconcagua        -32.653431   -70.011083
Source: Wikipedia

We can put those into an HTML list to be styled and accessed by JavaScript to create some distance and directions calculations. The next thing we need to do is check to see if the browser and operating system have geolocation support. To do this we test to see if the function is available or not using a single JavaScript if statement.

<script>
// If this is true, then the method is supported and we can try to access the location
if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(geo_success, geo_error);
}
</script>

The if statement will be false if geolocation support is not present, and then it is up to you to do something else instead as a fallback. For this example, we’ll do nothing since our page should work as is and only get progressively better if more functionality is available. The if statement will be true if there is support and therefore will continue inside the curly brackets to try to get the location. This should prompt the reader to accept or deny the request to get their location. If they say no, the second function callback is processed, in this case a function called geo_error; whereas if the location is available, it fires the geo_success function callback. The function geo_error(){ } isn’t that exciting. You can handle this in any way you see fit. The success function is more interesting. We get a position object passed into the function which contains a series of exciting attributes, namely the latitude and longitude of the device’s current location.

function geo_success(position){
    gLat = position.coords.latitude;
    gLon = position.coords.longitude;
}

Now, in the variables gLat and gLon we have the user’s approximate geographical position. We can use this information to start to calculate some distances between where they are and all the destinations. 
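The distance() helper that the loop in the next section relies on isn’t shown in the article. Here is a minimal sketch of one possible implementation, assuming (as the later code implies) that it returns an array of [kilometres, bearing in degrees]:

function distance(lat1, lon1, lat2, lon2) {
    var R = 6371; // mean radius of the Earth in kilometres
    var toRad = function(deg){ return deg * Math.PI / 180; };
    var dLat = toRad(lat2 - lat1);
    var dLon = toRad(lon2 - lon1);

    // Haversine formula for the great-circle distance
    var a = Math.sin(dLat/2) * Math.sin(dLat/2) +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon/2) * Math.sin(dLon/2);
    var d = R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));

    // Initial bearing from the first point to the second, normalized to 0-359.99
    var y = Math.sin(dLon) * Math.cos(toRad(lat2));
    var x = Math.cos(toRad(lat1)) * Math.sin(toRad(lat2)) -
            Math.sin(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.cos(dLon);
    var arc = (Math.atan2(y, x) * 180 / Math.PI + 360) % 360;

    return [d, arc];
}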
At the time of writing, you can also get position.coords.heading, but on Windows and iOS devices this returned NULL. In the future, if and when this is supported, this is also where you can easily grab the compass information. Inside the geo_success function, we want to loop through the HTML to get all of the mountain peaks’ latitudes and longitudes and compute the distance.

...
$('.geo').each(function(){
    // Get the lat/lon from the HTML
    tLat = $(this).find('.lat').html()
    tLon = $(this).find('.lon').html()

    // compute the distance between the current location and this point's location
    dist = distance(tLat,tLon,gLat,gLon);

    // set the return values into something useful
    d = parseInt(dist[0]*10)/10;
    a = parseFloat(dist[1]);

    // display the value in the HTML and style the arrow
    $(this).find('.distance').html(d+' km away');
    $(this).find('.direction').css('-webkit-transform','rotate(-' + a + 'deg)');

    // store the arc for later use if compass is available
    $(this).attr('data-arc',a);
});

In the variable d we have the distance between the current location and the location of the mountain peak based on the Haversine Formula. The variable a is the arc, which has a value from 0 to 359.99. This will be useful later if we have compass support. Given these two values we have a distance and a heading to style the HTML. The next thing we want to do is check to see if the device has a compass and then get access to the current heading. As we’ll see, there are several ways to do this, some of which work on certain devices but not others. The W3C geolocation spec says that, along with the coordinates, there are several other attributes: accuracy; altitude; and heading. Heading is the direction to true north, which is different from magnetic north! WebKit and Windows return NULL for the heading value, but WebKit has an experimental method to fetch the heading. If you get into accessing these sensors, you’ll have to try to catch a few of these methods to finally get a value. Assuming you do, we can move on to the more interesting display opportunities. In an ideal world, this would succeed and set a variable we’ll call compassHeading to get a value between 0 and 359.99 degrees. Now we know which direction north is, and we also know the direction relative to north of the path to our destination, so we can subtract the two values to get an arrow to display on the screen. But we’re not finished yet: we also need to get the device’s orientation (landscape or portrait) and subtract the correct amount from the angle for the arrow. Once we have a value, we can use CSS to rotate the arrow the correct number of degrees.

-webkit-transform: rotate(-180deg)

Not all devices support a standard way to access compass information, so in the meantime we need to use a workaround. On iOS, you can use the experimental event method e.webkitCompassHeading. We want the compass to update in real time as the device is moved around, so we’ll put this inside an event listener.

window.addEventListener('deviceorientation', function(e) {
    // Loop through all the locations on the page
    $('.geo').each(function(){
        // get the arc value from north we computed and stored earlier
        destination_arc = parseInt($(this).attr('data-arc'))
        compassHeading = e.webkitCompassHeading + window.orientation + destination_arc;
        // find the arrow element and rotate it accordingly
        $(this).find('.direction').css('-webkit-transform','rotate(-' + compassHeading + 'deg)');
    });
});

As the device is rotated, the compass arrow will constantly be updated. 
If you want to see an example, you can have a look at this page which shows the distances to all the peaks on each continent. With progressive enhancement, we slowly layer on additional functionality as we go. The reader will first see the list of locations with a latitude and longitude. If the device is capable and permissions allow, it will then compute the distance. If a compass is available, with the correct permissions it will then add the final layer which is direction. You should consider this code a stub for your projects. If you are making a hyperlocal webpage with restaurant locations, for example, then consider adding these features. Knowing not only how far away a place is, but also the direction can be hugely important, and since the compass is always active, it acts as a guide to the location.

Future developments Improvements to this could include setting a timer and recalling the navigator.geolocation.getCurrentPosition() function and updating the distances. I chose very distant mountains so kilometres made sense, but you can multiply by 1,000 to convert to metres if you are dealing with much nearer places. Walking or driving would change the distances so the ability to refresh would be important. It is outside the scope of this article, but if you manage to get this HTML to work offline, then you can make a nice web app which sits on your devices’ homescreens and works even without an internet connection. This could be ideal for travellers in an unknown city looking for your destination. Just with offline storage, base64 encoding and data URIs, it is possible to embed plenty of design and functionality into a small offline webpage. Now you know how to use JavaScript to look up a destination’s location and figure out the distance and direction – never get lost again. 2012 Brian Suda briansuda 2012-12-19T00:00:00+00:00 https://24ways.org/2012/direction-distance-and-destinations/ code
100 Moo’y Christmas A note from the editors: Moo has changed their API since this article was written. As the web matures, it is less and less just about the virtual world. It is becoming entangled with our world and it is harder to tell what is virtual and what is real. There are several companies who are blurring this line and making the virtual just an extension of the physical. Moo is one such company. Moo offers simple print-on-demand services. You can print business cards, Moo mini cards, stickers, postcards and more. They give you the ability to upload your images, customize them, then have them sent to your door. Many companies allow this sort of digital-to-physical interaction, but Moo has taken it one step further and has built an API.

Printable stocking stuffers The Moo API consists of a simple XML file that is sent to their servers. It describes all the information needed to dynamically assemble and print your object. This is very helpful, not just for when you want to print your own stickers, but when you want to offer them to your customers, friends, organization or community with no hassle. Moo handles the check-out and shipping; all you need to do is what you do best: create! Now, using an API sounds complicated, but it is actually very easy. I am going to walk you through the options so you can easily be printing in no time. Before you can begin sending data to the Moo API, you need to register and get an API key. This is important, because it allows Moo to track usage and to credit you. To register, visit http://www.moo.com/api/ and click “Request an API key”. In the following examples, I will use {YOUR API KEY HERE} as a placeholder; replace that with your API key and everything will work fine. First thing you need to do is to create an XML file to describe the check-out basket. Open any text editor and start with some XML basics. Don’t worry, this is pretty simple and Moo gives you a few tools to check your XML for errors before you order.

<?xml version="1.0" encoding="UTF-8"?>
<moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <request>
        <version>0.7</version>
        <api_key>{YOUR API KEY HERE}</api_key>
        <call>build</call>
        <return_to>http://www.example.com/return.html</return_to>
        <fail_to>http://www.example.com/fail.html</fail_to>
    </request>
    <payload>
        ...
    </payload>
</moo>

Much like HTML’s <head> and <body>, Moo has created <request> and <payload> elements all wrapped in a <moo> element. The <request> element contains a few pieces of information that are the same across all the API calls. The <version> element describes which version of the API is being used. This is more important for Moo than for you, so just stick with “0.7” for now. The <api_key> allows Moo to track sales, referrers and credit your account. The <call> element can only take “build” so that is pretty straightforward. The <return_to> and <fail_to> elements are URLs. These are optional and are the URLs the customer is redirected to if there is an error, or when the check-out process is complete. This allows for some basic branding and a custom “thank you” page which is under your control. That’s it for the <request> element, pretty easy so far! Next up is the <payload> element. What goes inside here describes what is to be printed. There are two possible elements: we can put <chooser> or we can put <products> directly inside <payload>. 
They work in similar ways, but they drop the customer into different parts of the Moo checkout process. If you specify <products> then you send the customer straight to the Moo payment process. If you specify <chooser> then you send the customer one step earlier, where they are allowed to pick and choose some images, remove the ones they don’t like, adjust the crop, etc. The example here will use <chooser>, but with a little bit of homework you can easily adjust to <products> if you desire.

...
<chooser>
    <product_type>sticker</product_type>
    <images>
        <url>http://example.com/images/christmas1.jpg</url>
    </images>
</chooser>
...

Inside the <chooser> element, we can see there are two basic pieces of information: the type of product we want to print, and the images that are to be printed. The <product_type> element can take one of five options and is required! The possibilities are: minicard, notecard, sticker, postcard or greetingcard. We’ll now look at two of these more closely.

Moo Stickers In the Moo sticker books you get 90 small squarish stickers in a small little booklet. The simplest XML you could send would be something like the following payload:

...
<payload>
    <chooser>
        <product_type>sticker</product_type>
        <images>
            <url>http://example.com/image1.jpg</url>
        </images>
        <images>
            <url>http://example.com/image2.jpg</url>
        </images>
        <images>
            <url>http://example.com/image3.jpg</url>
        </images>
    </chooser>
</payload>
...

This creates a sticker book with only 3 unique images, but 30 copies of each image. The sticker books always print 90 stickers in multiples of the images you uploaded. That example only has 3 <images> elements, but you can easily duplicate the XML and send up to 90. The <url> should be the full path to your image and the image needs to be a minimum of 300 pixels by 300 pixels. You can add more XML to describe cropping, but the simplest option is either to let your customers choose or to pre-crop all your images square so there are no issues. The full XML you would post to the Moo API to print sticker books would look like this:

<?xml version="1.0" encoding="UTF-8"?>
<moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <request>
        <version>0.7</version>
        <api_key>{YOUR API KEY HERE}</api_key>
        <call>build</call>
        <return_to>http://www.example.com/return.html</return_to>
        <fail_to>http://www.example.com/fail.html</fail_to>
    </request>
    <payload>
        <chooser>
            <product_type>sticker</product_type>
            <images>
                <url>http://example.com/image1.jpg</url>
            </images>
            <images>
                <url>http://example.com/image2.jpg</url>
            </images>
            <images>
                <url>http://example.com/image3.jpg</url>
            </images>
        </chooser>
    </payload>
</moo>

Mini-cards The mini-cards are the small cute business cards in 14×35 dimensions and come in packs of 100. Since the mini-cards are print on demand, this allows you to have 100 unique images on the back of the cards. Just like the stickers example, we need the same XML setup. The <moo> element and <request> elements will be the same as before. The part you will focus on is the <payload> section. Since you are sending along specific information, we can’t use the <chooser> option any more. Switch this to <products>, which has a child of <product>, which in turn has a <product_type> and <designs>. This might seem like a lot of work, but once you have it set up you won’t need to change it.

...
<payload>
    <products>
        <product>
            <product_type>minicard</product_type>
            <designs>
                ...
            </designs>
        </product>
    </products>
</payload>
... 
So now that we have the basic framework, we can talk about the information specific to minicards. Inside the <designs> element, you will have one <design> for each card. Much like before, this contains a way to describe the image. Note that this time the element is called <image>, not images plural. Inside the <image> element you have a <url> which points to where the image lives and a <type>. The <type> should just be set to ‘variable’. You can pass crop information here instead, but we’re going to keep it simple for this tutorial. If you are interested in how that works, you should refer to the official API documentation.

...
<design>
    <image>
        <url>http://example.com/image1.jpg</url>
        <type>variable</type>
    </image>
</design>
...

So far, we have managed to build a pack of 100 Moo mini-cards with the same image on the front. If you wanted 100 different images, you just need to replicate this snippet 99 more times. That describes the front design, but the flip-side of your mini-cards can contain 6 lines of text, which are customizable in a variety of colors, fonts and styles. The API allows you to create different text on the back of each mini-card, something the web interface doesn’t implement. To describe the text on the mini-card we need to add a <text_collection> element inside the <design> element. If you skip this element, the back of your mini-card will just be blank, but that’s not very festive! Inside the <text_collection> element, we need to describe the type of text we want to format, so we add a <minicard> element, which in turn contains all the lines of text. Each of Moo’s printed products takes different numbers of lines of text, so if you are not planning on making mini-cards, be sure to consult the documentation. For mini-cards, we can have 6 distinct lines, each with their own style and layout. Each line is represented by an element <text_line> which has several optional children. The <id> tells which line of the 6 to print the text on. The <string> is the text you want to print and it must be shorter than 38 characters. The <bold> element is false by default, but if you want your text bolded, then add this and set it to true. The <align> element is also optional. By default it is set to align left. You can also set this to right or center if you desire. The <font> element takes one of 3 values: modern, traditional or typewriter. The default is modern. Finally, you can set the <colour> (yes, that’s color with a ‘u’; Moo is a British company, so they get to make the rules. When you start a print-on-demand company, you can spell it however you want). The <colour> element takes a 6-character hex value with a leading #.

<design>
... 
    <text_collection>
        <minicard>
            <text_line>
                <id>(1-6)</id>
                <string>String, I must be less than 38 chars!</string>
                <bold>true</bold>
                <align>left</align>
                <font>modern</font>
                <colour>#ff0000</colour>
            </text_line>
        </minicard>
    </text_collection>
</design>

If you combine all of this into a mini-card request you’d get this example:

<?xml version="1.0" encoding="UTF-8"?>
<moo xsi:noNamespaceSchemaLocation="http://www.moo.com/xsd/api_0.7.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <request>
        <version>0.7</version>
        <api_key>{YOUR API KEY HERE}</api_key>
        <call>build</call>
        <return_to>http://www.example.com/return.html</return_to>
        <fail_to>http://www.example.com/fail.html</fail_to>
    </request>
    <payload>
        <products>
            <product>
                <product_type>minicard</product_type>
                <designs>
                    <design>
                        <image>
                            <url>http://example.com/image1.jpg</url>
                            <type>variable</type>
                        </image>
                        <text_collection>
                            <minicard>
                                <text_line>
                                    <id>1</id>
                                    <string>String, I must be less than 38 chars!</string>
                                    <bold>true</bold>
                                    <align>left</align>
                                    <font>modern</font>
                                    <colour>#ff0000</colour>
                                </text_line>
                            </minicard>
                        </text_collection>
                    </design>
                </designs>
            </product>
        </products>
    </payload>
</moo>

Now you know how to construct the XML that describes what to print. Next, you need to know how to send it to Moo to make it happen!

Posting to the API So your XML file is ready to go. First thing we need to do is check it to make sure it’s valid. Moo has created a simple validator where you paste in your XML, and it alerts you to problems. When you have a fully valid XML file, you’ll want to send that to the Moo API. There are a few ways to do this, but the simplest is with an HTML form. This is the sample code for an HTML form with a big “Buy My Stickers” button. Once you know that it is working, you can use all your existing HTML knowledge to style it up any way you like.

<form method="POST" action="http://www.moo.com/api/api.php">
    <input type="hidden" name="xml" value="&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
        &lt;moo xsi:noNamespaceSchemaLocation=&quot;http://www.moo.com/xsd/api_0.7.xsd&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt;
            &lt;request&gt;....&lt;/request&gt;
            &lt;payload&gt;...&lt;/payload&gt;
        &lt;/moo&gt;">
    <input type="submit" name="submit" value="Buy My Stickers"/>
</form>

This is just a basic <form> element that submits to the Moo API, http://www.moo.com/api/api.php, when someone clicks the button. There is a hidden input called “xml” which contains the XML document we created previously (note that the quotes and angle brackets have to be escaped as HTML entities so the XML can sit safely inside the value attribute). Those of you who need to “view source” to fully understand what’s happening can see a working version and peek under the hood. Using the API has advantages over uploading the images directly yourself. The images and text that you send via the API can be dynamic. Some companies, like Dopplr, have taken user profiles and dynamic data that changes every minute to generate custom stickers of places that you’ve travelled to or mini-cards with a world map of all the cities you have visited. Every single customer has different travel plans and therefore different sets of stickers and mini-card maps. The API allows for the most current information to be printed, on demand, in real time.

Go forth and Moo’ltiply See, making an API call wasn’t that hard, was it? You are now 90% of the way to creating anything with the Moo API. With a bit of reading, you can learn that extra 10% and print any Moo product. Be on the lookout in 2009 for the official release of the 1.0 API with improvements and some extras that were not available when this article was written. 
This article is released under the creative-commons attribution share-alike license. That means you are free to re-distribute it, mash it up, translate it and otherwise re-use it in ways the author never considered; in return he only asks that you mention his name. This work by Brian Suda is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. 2008 Brian Suda briansuda 2008-12-19T00:00:00+00:00 https://24ways.org/2008/mooy-christmas/ code
160 Tracking Christmas Cheer with Google Charts A note from the editors: Since this article was written Google has retired the Charts API. Let’s get something out in the open: I love statistics. As an informatician I can’t get enough graphs, charts, and numbers. So you can imagine when Google released their Charts API I thought Christmas had come early. I immediately began to draw up graphs for the holiday season using the new API; and using my new-found chart-making skills I’ll show you what you can and can’t do with Google Charts.

Mummy, it’s my first chart The Google Charts API allows you to send data to Google; in return they give you back a nicely-rendered graph. All the hard work is done on Google’s servers — you need only reference an image in your HTML. You pass along the data — the numbers for the charts, axis labels, and so on — in the query string of the image’s URL. If you want to add charts to your blog or web site, there’s probably no quicker way to get started. Here’s a simple example: if we add the following line to an HTML page:

<img src="http://chart.apis.google.com/chart?cht=lc&chs=200x125&chd=s:ZreelPuevfgznf2008" />

Then we’ll see the line graph in Figure 1 appear in our page. That graph is hosted on Google’s own server [1]: http://chart.apis.google.com/.

Figure 1: A simple example of a line graph created with Google Charts.

If you look at the URL used in the example you’ll notice we’re passing some parameters along in the query string (the bit after the question mark). The query string looks like this:

cht=lc&chs=200x125&chd=s:ZreelPuevfgznf2008

It contains everything Google Charts needs to draw the graph. There are three parameters in the query string:

cht: this specifies the type of chart Google Charts will generate (in this case, lc is a line chart).
chs, the value of which is 200x125: this defines the chart’s size (200 pixels wide by 125 pixels high).
chd, the value of which is s:ZreelPuevfgznf2008: this is the actual chart data, which we’ll discuss in more detail later.

These three parameters are the minimum you need to send to Google Charts in order to create a chart. There are lots more parameters you can send too (giving you more choice over how a chart is displayed), but you have to include at least these three before a chart can be created. Using these three parameters you can create pie charts, scatter plots, Venn diagrams, bar charts (and more) up to 1,000 pixels wide or 1,000 pixels high (but no more than 300,000 pixels in total).

Christmas pie After I discovered the option to create a pie chart I instantly thought of graphing all the types of food and beverages that I’ll consume at this year’s Christmas feast. I can represent each item as a percentage of all the food on a pie chart (just thinking about that makes me hungry). By changing the value of the cht parameter in the image’s query string I can change the chart type from a line chart to a pie chart. Google Charts offers two different types of pie chart: a fancy three-dimensional version and a two-dimensional overhead version. I want to stick with the latter, so I need to change cht=lc to cht=p (the p telling Google Charts to create a pie chart; if you want the three-dimensional version, use cht=p3). As a pie chart is circular I also need to adjust the size of the chart to make it square. Finally, it would be nice to add a title to the graph. I can do this by adding the optional parameter, chtt, to the end of the image URL. I end up with the chart you see in Figure 2. 
Figure 2: Pie chart with a title.

To add this chart to your own page, you include the following (notice that you can’t include spaces in URLs, so you need to encode them as plus-signs):

<img src="http://chart.apis.google.com/chart?chtt=Food+and+Drink+Consumed+Christmas+2007&cht=p&chs=300x300&chd=s:ZreelPuevfgznf2008" />

OK, that’s great, but there are still two things I want to do before I can call this pie chart complete. First I want to label each slice of the pie. And second I want to include the proper data (at the moment the slices are meaningless). If 2007 is anything like 2006, the breakdown will be roughly as follows:

Egg nog 10%
Christmas Ham 20%
Milk (not including egg nog) 8%
Cookies 25%
Roasted Chestnuts 5%
Chocolate 3%
Various Other Beverages 15%
Various Other Foods 9%
Snacks 5%

I have nine categories of food and drink to be tracked, so I need nine slice labels. To add these to the chart, I use the chl parameter. All nine labels are sent in one value; I use the vertical-pipe character, |, to separate them. So I need to append the following to the query string:

chl=Egg+nog|Christmas+Ham|Milk+(not+including+egg+nog)|Cookies|Roast+Chestnuts|Chocolate|Various+Other+Beverages|Various+Other+Foods|Snacks

Next I need to add the corresponding percentage values to the chart labels. Encoding the chart data is the trickiest part of the Google Charts API — but by no means complicated. There are three different ways to encode your data on a chart. As I’m only dealing with small numbers, I’m going to use what Google calls simple encoding. Simple encoding offers a sixty-two value spectrum in which to represent data. Remember the mandatory option, chd, from the first example? The value for this is split into two parts: the type of encoding and the graph data itself. These two parts are separated with a colon. To use simple encoding, the first character of the chd option must be a lower case s. Follow this with a colon and everything after it is considered data for the graph. In simple encoding, you have sixty-two values to represent your data. These values are lowercase and uppercase letters from the Latin alphabet (fifty-two characters in total) and the digits 0 to 9. Each letter of the alphabet represents a single number: A equals 0, B equals 1, and so on up to Z, which equals 25; a equals 26, b equals 27, and so on up to z, which equals 51. The ten digits represent the numbers 52 to 61: 0 equals 52, 1 equals 53, and 9 equals 61. In the previous two examples we used the string ZreelPuevfgznf2008 as our chart data; the Z is equal to 25, the r is equal to 43, the e is equal to 30, and so on. I want to encode the percentage values 10, 20, 8, 25, 5, 3, 15, 9 and 5, so in simple encoding I would use the string KUIZFDPJF. If you think figuring this out for each chart may make your head explode, don’t worry: help is out there (and there’s a short code sketch of the conversion below). Do you remember I said I needed to change the image dimensions to be square, to accommodate the pie chart? Well now I’m including labels I need even more room. And as I’m in a Christmassy mood I’m going to add some festive colours too. The optional chco parameter is used to change the chart color. You set this using the same hexadecimal (“hex”) notation found in CSS. So let’s make our pie chart green by adding chco=00AF33 (don’t start it with a hash character as in CSS) to the image URL. If we only specify one hex colour for the pie chart Google Charts will use shades of that colour for each of the slices. To choose your own colours, pass a comma separated list of colours. 
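As promised, here is a minimal sketch (not from the original article) of the simple-encoding conversion in JavaScript, so you don’t have to count through the alphabet by hand:

// Google Charts simple encoding: maps each value in the range 0-61
// to one character of A-Z, a-z, 0-9.
var SIMPLE = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';

function simpleEncode(values) {
    var out = '';
    for (var i = 0; i < values.length; i++) {
        out += SIMPLE.charAt(values[i]); // assumes each value is an integer from 0 to 61
    }
    return 's:' + out;
}

// simpleEncode([10, 20, 8, 25, 5, 3, 15, 9, 5]) returns "s:KUIZFDPJF"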
The “Milk” and “Cookies” slices were consumed together, so we can make those two slices more of a reddish colour. I’ll use shades of green for the other slices. My chco parameter now looks like this: chco=00AF33,4BB74C,EE2C2C,CC3232,33FF33,66FF66,9AFF9A,C1FFC1,CCFFCC. After all this, I’m left with the following URL:

http://chart.apis.google.com/chart?chco=00AF33,4BB74C,EE2C2C,CC3232,33FF33,66FF66,9AFF9A,C1FFC1,CCFFCC&chl=Egg+nog|Christmas+Ham|Milk+(not+including+egg+nog)|Cookies|Roast+Chestnuts|Chocolate|Various+Other+Beverages|Various+Other+Foods|Snacks&chtt=Food+and+Drink+Consumed+Christmas+2007&cht=p&chs=600x300&chd=s:KUIZFDPJF

What does that give us? I’m glad you asked. I have the rather beautiful 600-pixel wide pie chart you see in Figure 3.

Figure 3: A Christmassy pie chart with labels.

But I don’t like pie charts The pie chart was invented by the Scottish polymath William Playfair in 1801. But not everyone is as excited by pie charts as wee Billy, so if you’re an anti-pie-chartist, what can you do? You can easily reuse the same data but display it as a bar graph in a snap. The first thing we need to do is change the value of the cht parameter from p to bhg. This creates a horizontal bar graph (you can request a vertical bar graph using bvg). The data and labels all remain the same, but we need to decide where the labels will appear. I’ll talk more about how to do all this in the next section. In Figure 4 you’ll see the newly-converted bar graph. The URL for the graph is:

http://chart.apis.google.com/chart?cht=bhg&chs=600x300&chd=s:KUIZFDPJF&chxt=x,y&chtt=Food+and+Drink+Consumed+Christmas+2007&chxl=1:|Egg+nog|Christmas+Ham|Milk+(not+including+egg+nog)|Cookies|Roast+Chestnuts|Chocolate|Various+Other+Beverages|Various+Other+Foods|Snacks&chco=00AF33

Figure 4: The pie chart from Figure 3 represented as a bar chart.

Two lines, one graph Pie charts and bar charts are interesting, but what if I want to compare last year’s Christmas cheer with this year’s? That sounds like I’ll need two lines on one graph. The code is much the same as the previous examples; the most obvious difference is I need to set up the chart as a line graph. Creating some dummy values for the required parameters, I end up with:

<img src="http://chart.apis.google.com/chart?chs=800x300&cht=lxy&chd=t:0,100|0,100" />

The chs=800x300 sets the dimensions of the new chart, while cht=lxy describes the type of chart we are using (in this case a line chart with x and y co-ordinates). For the chart data I’m going to demonstrate a different encoding, text encoding. To use this I start the value of the chd parameter with “t:” instead of “s:”, and follow it with a list of x coordinates, a vertical pipe, |, and a list of y coordinates. Given the URL above, Google Charts will render the chart shown in Figure 5.

Figure 5: A simple line graph with x and y co-ordinates.

To make this graph a little more pleasing to the eye, I can add much the same as I did to the pie chart. I’ll add a chart title. Maybe something like “Projected Christmas Cheer for 2007”. Just as before I would add a chtt parameter to the image URL:

&chtt=Projected+Christmas+Cheer+for+2007

Next, let’s add some labels on the y axis to represent a scale from 0 to 100. On the x axis let’s label for the most important days of December. To do this I need to use the chart axis type parameter, chxt. This allows us to specify the axes and associate some labels with them. 
As I’m only interested in the y-axis (to the left of the chart) and the x-axis (below the chart), we add chxt=x,y to our image URL. Now I need my label data. This is slightly more tricky because I want the data evenly spaced without labelling every item. The parameter for labels is chxl, the chart axis label. You match a label to an axis by using a number. So 0:Label1 is the zero index of chxt — in this case the x-axis. 1:Label2 is the first index of chxt — the y-axis. The order of these parameters or labels doesn’t matter as long as you associate them with their chxt correctly. The next thing to know about chxl is that you can add an empty label. Labels are separated by vertical pipe; if you don’t put any text in a label, you just leave the two vertical pipes empty (“||”) and Google Charts will allocate space but no label. For our vertical y axis, we want to label only 50% and 100% on the graph and plot them in their respective places. Since the y-axis is the second item, 1: (remember to start counting at zero), we add ten spaces to our image URL, chxl=1:||||||50|||||100. This will output the 50 halfway and the 100 at the top; all the other spaces will be empty. We can do the same thing to get specific dates along the x-axis as well. Let’s add the 1st of December, St. Nick’s Day (the 6th), Christmas Day, Boxing Day (a holiday common in the UK and the Commonwealth, on the 26th), and the final day of the month, the 31st. Since this is the x-axis I’ll use 0: as a reference in the chxt parameter to tell Google Charts which axis to label. In full, the chxl parameter now looks like:

chxl=1:||||||50|||||100|0:|Dec+1st|||||6th||||10th|||||15th|||||20th|||||25th|26th|||||Dec+31st

That’s pretty. Before we begin to graph our data, I’ll do one last thing: add some grid lines to the chart so as to better connect the data to the labels. The parameter for this is chg, short for chart grid lines. The parameter takes four comma-separated arguments. The first is the x-axis spacing for the grid. I have thirty-one days, so I need thirty vertical lines. The chart is 100% wide, so 3.33 (100 divided by 30) is the required spacing. As for the y-axis: the axis goes up to 100% but we probably only need to have a horizontal line every 10%, so the required spacing is 10 (100 divided by 10). That is the second argument. The last two arguments control the dash-style of the grid-lines. The first number is the length of the line dash and the second is the space between the dashes. So 6,3 would mean a six-unit dash with a three-unit space. I like a ratio of 1,3 but you can change this as you wish. Now that I have the four arguments, the chg parameter looks like:

chg=3.333,10,1,3

If I add that to the chart URL I end up with:

http://chart.apis.google.com/chart?chs=800x300&cht=lxy&chd=t:0,100|0,100&chtt=Projected+Christmas+Cheer+for+2007&chxt=x,y&chxl=0:|Dec+1st|||||6th|||||||||||||||||||25th|26th|||||Dec+31st|1:||||||50|||||100&chg=3.3333,10,1,3

Figure 6: Chart ready to receive the Christmas cheer values.

Real data Now the chart is ready I can add historical data from 2006 and current data from 2007. Having a look at last year’s cheer levels we find some highs and lows throughout the month:

Dec 1st Advent starts; life is good 30%
Dec 6th St. Nick’s Day, awake to find good things in my shoes 45%
Dec 8th Went Christmas carolling, nearly froze 20%
Dec 10th Christmas party at work, very nice dinner 50%
Dec 18th Panic Christmas shopping, hate rude people 15%
Dec 23rd Off work, home eating holiday food 80%
Dec 25th Opened presents, good year, but got socks again from Grandma 60%
Dec 26th Boxing Day; we’re off and no one knows why 70%
Dec 28th Third day of leftovers 40%
Dec 29th Procured some fireworks for New Year’s 55%
Dec 31st New Year’s Eve 80%

Since I’m plotting data for 2006 and 2007 on the same graph I’ll need two different colours — one for each year’s line — and a key to denote what each colour represents. The key is controlled by the chdl (chart data legend) parameter. Again, each part of the parameter is separated by a vertical pipe, so for two labels I’ll use chdl=2006|2007. I also want to colour-code them, so I’ll need to add the chco as I did for the pie chart. I want a red line and a green line, so I’ll use chco=458B00,CD2626 and add this to the image URL. Let’s begin to plot the 2006 data on the chart, replacing our dummy data of chd=t:0,100|0,100 with the correct information. The chd works by first listing all the x coordinates (each separated by a comma), then a vertical pipe, and then all the y coordinates (also comma-separated). The chart is 100% wide, so I need to convert the days into a percentage of the month. The 1st of December is 0 and the 31st is 100. Everything else is somewhere in between. Our formula is:

(d – 1) × 100 ÷ (31 – 1)

where d is the day of the month. The formula states that each day will be printed every 3.333 units; so the 6th of December will be printed at 16.665 units. I can repeat the process for the other dates listed to get the following x coordinates: 0,16.7,23.3,33.3,60,76.7,83.3,86.7,93.3,96.7. The y axis coordinates are easy because our scale is 100%, just like our rating, so we can simply copy them across as 30,45,20,50,15,80,60,70,40,55,80. This gives us a final chd value of:

chd=t:0,16.7,23.3,33.3,60,76.7,83.3,86.7,93.3,96.7,100|30,45,20,50,15,80,60,70,40,55,80

Onto 2007: I can put the data for the month so far to see how we are trending.

Dec 1st Christmas shopping finished already 50%
Dec 4th Computer hard disk drive crashed (not a Christmas-related accident, but it put me in a bad mood) 10%
Dec 6th Missed St. Nick’s Day completely due to travelling 30%
Dec 9th Dinner with friends before they travel 55%
Dec 11th 24ways article goes live 60%

Using the same system we did for 2006, I can take the five data points and plot them on the chart. The new x axis values will be 0, 10, 16.7, 26.7 and 33.3, and the new y axis values 50, 10, 30, 55 and 60. We incorporate those into the image URL by appending these values onto the chd parameter we already have, which then becomes:

chd=t:0,16.7,23.3,33.3,60,76.7,83.3,86.7,93.3,96.7,100|30,45,20,50,15,80,60,70,40,55,80|0,10,16.7,26.7,33.3|50,10,30,55,60

Passing this to Google Charts results in Figure 7.

http://chart.apis.google.com/chart?chs=800x300&cht=lxy&chd=t:0,100|0,100&chtt=Projected+Christmas+Cheer+for+2007&chxt=x,y&chxl=0:|Dec+1st|||||6th|||||||||||||||||||25th|26th|||||Dec+31st|1:||||||50|||||100&chg=3.3333,10,1,3&chd=t:0,16.7,23.3,33.3,60,76.7,83.3,86.7,93.3,96.7,100|30,45,20,50,15,80,60,70,40,55,80|0,10,16.7,26.7,33.3|50,10,30,55,60&chco=458B00,CD2626&chdl=2006|2007

Figure 7: Projected Christmas cheer for 2006 and 2007.

Did someone mention Edward Tufte? Google Charts are a robust set of chart types that you can create easily and freely using their API. 
As you can see, you can graph just about anything you want using the line graph, bar charts, scatter plots, Venn diagrams and pie charts. One type of chart conspicuously missing from the API is sparklines. Sparklines were proposed by Edward Tufte as “small, high resolution graphics embedded in a context of words, numbers, images”. They can be extremely useful, but can you create them in Google Charts? The answer is: “Yes, but it’s an undocumented feature”. (The usual disclaimer about undocumented features applies.) If we take our original line graph example and change the value of the cht parameter from lc (line chart) to lfi (financial line chart), the axis-lines are removed. This allows you to make a chart — a sparkline — small enough to fit into a sentence. Google uses the lfi type all throughout their financial site, but it’s not yet part of the official API.

MerryChristmas: http://chart.apis.google.com/chart?cht=lfi&chs=100x15&chd=s:MerryChristmas
24ways: http://chart.apis.google.com/chart?cht=lfi&chs=100x15&chd=s:24ways&chco=999999
HappyHolidays: http://chart.apis.google.com/chart?cht=lfi&chs=100x15&chd=s:HappyHolidays&chco=ff0000
HappyNewYear: http://chart.apis.google.com/chart?cht=lfi&chs=100x15&chd=s:HappyNewYear&chco=0000ff

Summary The new Google Charts API is a powerful method for creating charts and graphs of all types. If you apply a little bit of technical skill you can create pie charts, bar graphs, and even sparklines as and when you need them. Now you’ve finished reading the article, I hope you waste no more time: go forth and chart!

Further reading:
Google Charts API
More on Google Charts
How to handle negative numbers
12 Days of Christmas Pie Chart

[1] In order to remain within the 50,000 requests a day limit the Google Charts API imposes, chart images on this page have been cached and are being served from our own servers. But the URLs work – try them!

2007 Brian Suda briansuda 2007-12-11T00:00:00+00:00 https://24ways.org/2007/tracking-christmas-cheer-with-google-charts/ ux
223 Calculating Color Contrast Some websites and services allow you to customize your profile by uploading pictures, changing the background color or other aspects of the design. As a customer, this personalization turns a web app into your little nest where you store your data. As a designer, letting your customers have free rein over the layout and design is a scary prospect. So what happens to all the stock text and images that are designed to work on nice white backgrounds? Even the Mac only lets you choose between two colors for the OS, blue or graphite! Opening up the ability to customize your site’s color scheme can be a recipe for disaster unless you are flexible and understand how to find maximum color contrasts. In this article I will walk you through two simple equations to determine if you should be using white or black text depending on the color of the background. The equations are both easy to implement and produce similar results. It isn’t a matter of which is better, but more the fact that you are using one at all! That way, even with the craziest of Geocities color schemes that your customers choose, at least your text will still be readable. Let’s have a look at a range of various possible colors. Maybe these are pre-made color schemes, corporate colors, or plucked from an image. Now that we have these potential background colors and their hex values, we need to find out whether the corresponding text should be in white or black, based on which has a higher contrast, therefore affording the best readability. This can be done at runtime with JavaScript or in the back-end before the HTML is served up. There are two functions I want to compare. The first, I call ’50%’. It takes the hex value and compares it to the value halfway between pure black and pure white. If the hex value is less than half, meaning it is on the darker side of the spectrum, it returns white as the text color. If the result is greater than half, it’s on the lighter side of the spectrum and returns black as the text value.

In PHP:

function getContrast50($hexcolor){
    return (hexdec($hexcolor) > 0xffffff/2) ? 'black':'white';
}

In JavaScript:

function getContrast50(hexcolor){
    return (parseInt(hexcolor, 16) > 0xffffff/2) ? 'black':'white';
}

It doesn’t get much simpler than that! The function converts the six-character hex color into an integer and compares that to one half the integer value of pure white. The function is easy to remember, but is naive when it comes to understanding how we perceive parts of the spectrum. Different wavelengths have greater or lesser impact on the contrast. The second equation is called ‘YIQ’ because it converts the RGB color space into YIQ, which takes into account the different impacts of its constituent parts. Again, the equation returns white or black and it’s also very easy to implement.

In PHP:

function getContrastYIQ($hexcolor){
    $r = hexdec(substr($hexcolor,0,2));
    $g = hexdec(substr($hexcolor,2,2));
    $b = hexdec(substr($hexcolor,4,2));
    $yiq = (($r*299)+($g*587)+($b*114))/1000;
    return ($yiq >= 128) ? 'black' : 'white';
}

In JavaScript:

function getContrastYIQ(hexcolor){
    var r = parseInt(hexcolor.substr(0,2),16);
    var g = parseInt(hexcolor.substr(2,2),16);
    var b = parseInt(hexcolor.substr(4,2),16);
    var yiq = ((r*299)+(g*587)+(b*114))/1000;
    return (yiq >= 128) ? 'black' : 'white';
}

You’ll notice first that we have broken down the hex value into separate RGB values. This is important because each of these channels is scaled in accordance with its visual impact. 
Once everything is scaled and normalized, it will be in a range between zero and 255. Much like the previous ’50%’ function, we now need to check if the input is above or below halfway. Depending on where that value is, we’ll return the corresponding highest contrasting color. That’s it: two simple contrast equations which work really well to determine the best readability. If you are interested in learning more, the W3C has a few documents about color contrast and how to determine if there is enough contrast between any two colors. This is important for accessibility to make sure there is enough contrast between your text and link colors and the background. There is also a great article by Kevin Hale on Particletree about his experience with choosing light or dark themes. To round it out, Jonathan Snook created a color contrast picker which allows you to play with RGB sliders to get values for YIQ, contrast and others. That way you can quickly fiddle with the knobs to find the right balance.

Comparing results Let’s revisit our color schemes and see which text color is recommended for maximum contrast based on these two equations. If we use the simple ’50%’ contrast function, we can see that it recommends black against all the colors except the dark green and purple on the second row. In general, the equation feels the colors are light and that black is a better choice for the text. The more complex ‘YIQ’ function, with its weighted colors, has slightly different suggestions. White text is still recommended for the very dark colors, but there are some surprises. The red and pink values show white text rather than black. This equation takes into account the weight of the red value and determines that the hue is dark enough for white text to show the most contrast. As you can see, the two contrast algorithms agree most of the time. There are some instances where they conflict, but overall you can use the equation that you prefer. I don’t think it is a major issue if some edge-case colors get one contrast over another; they are still very readable. Now let’s look at some common colors and then see how the two functions compare. You can quickly see that they do pretty well across the whole spectrum. In the first few shades of grey, the white and black contrasts make sense, but as we test other colors in the spectrum, we do get some unexpected deviation. Pure red #FF0000 has a flip-flop. This is due to how the ‘YIQ’ function weights the RGB parts. While you might have a personal preference for one style over another, both are justifiable. In this second round of colors, we go deeper into the spectrum, off the beaten track. Again, most of the time the contrasting algorithms are in sync, but every once in a while they disagree. You can select which you prefer, neither of which is unreadable.

Conclusion Contrast in color is important, especially if you cede all control and take a hands-off approach to the design. It is important to select smart defaults by making the contrast between colors as high as possible. This makes it easier for your customers to read, increases accessibility and is generally just easier on the eyes. Sure, there are plenty of other equations out there to determine contrast; what is most important is that you pick one and implement it into your system. So, go ahead and experiment with color in your design. You now know how easy it is to guarantee that your text will be the most readable in any circumstance. 
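As a final illustration, here is a minimal usage sketch (not from the article; the input element and its ID are assumptions) wiring the JavaScript YIQ function up to a customer’s colour choice:

// Re-style the page whenever the customer picks a new background colour.
// Assumes an <input id="bg"> holding a six-character hex value like "ff0000",
// and the getContrastYIQ() function defined above.
document.getElementById('bg').onchange = function(){
    var hex = this.value.replace('#', '');
    document.body.style.backgroundColor = '#' + hex;
    document.body.style.color = getContrastYIQ(hex); // 'black' or 'white'
};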
2010 Brian Suda briansuda 2010-12-24T00:00:00+00:00 https://24ways.org/2010/calculating-color-contrast/ code
177 HTML5: Tool of Satan, or Yule of Santa? It would lead to unseasonal arguments to discuss the title of this piece here, and the arguments are as indigestible as the fourth turkey curry of the season, so we’ll restrict our article to the practical rather than the philosophical: what HTML5 can you reasonably expect to be able to use reliably cross-browser in the early months of 2010? The answer is that you can use more than you might think, due to the seasonal tinsel of feature-detection and using the sparkly pixie-dust of IE-only VML (but used in a way that won’t damage your Elf). Canvas canvas is a 2D drawing API that defines a blank area of the screen of arbitrary size, and allows you to draw on it using JavaScript. The pictures can be animated, such as in this canvas mashup of Wolfenstein 3D and Flickr. (The difference between canvas and SVG is that SVG uses vector graphics, so is infinitely scalable. It also keeps a DOM, whereas canvas is just pixels so you have to do all your own book-keeping yourself in JavaScript if you want to know where aliens are on screen, or do collision detection.) Previously, you needed to do this using Adobe Flash or Java applets, requiring plugins and potentially compromising keyboard accessibility. Canvas drawing is supported now in Opera, Safari, Chrome and Firefox. The reindeer in the corner is, of course, Internet Explorer, which currently has zero support for canvas (or SVG, come to that). Now, don’t pull a face like all you’ve found in your Yuletide stocking is a mouldy satsuma and a couple of nuts—that’s not the end of the story. Canvas was originally an Apple proprietary technology, and Internet Explorer had a similar one called Vector Markup Language which was submitted to the W3C for standardisation in 1998 but which, unlike canvas, was not blessed with retrospective standardisation. What you need, then, is some way for Internet Explorer to translate canvas to VML on-the-fly, while leaving the other, more standards-compliant browsers to use the HTML5. And such a way exists—it’s a JavaScript library called excanvas. It’s downloadable from http://code.google.com/p/explorercanvas/ and it’s simple to include it via a conditional comment in the head for IE: <!--[if IE]> <script src="excanvas.js"></script> <![endif]--> Simply include this, and your canvas will be natively supported in the modern browsers (and the library won’t even be downloaded) whereas IE will suddenly render your canvas using its own VML engine. Be sure, however, to check it carefully, as the IE JavaScript engine isn’t so fast and you’ll need to be sure that performance isn’t too degraded to use. Forms Since the beginning of the Web, developers have been coding forms, and then writing JavaScript to check whether an input is a correctly formed email address, URL, credit card number or conforms to some other pattern. The cumulative labour of the world’s developers over the last 15 years makes whizzing round in a sleigh and delivering presents seem like popping to the corner shop in comparison. With HTML5, that’s all about to change. As Yaili began to explore on Day 3, a host of new attributes to the input element provide built-in validation for email address formats (input type=email), URLs (input type=url), any pattern that can be expressed with a JavaScript-syntax regex (pattern="[0-9][A-Z]{3}") and the like. New attributes such as required, autofocus, input type=number min=3 max=50 remove much of the tedious JavaScript from form validation. 
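As a quick, hedged sketch (the action and field names are invented for illustration), a form using only the attributes above needs no validation script at all:

<form action="/register" method="post">
  <input type="email" name="email" required autofocus>
  <input type="url" name="website">
  <input type="number" name="guests" min="3" max="50">
  <input type="text" name="code" pattern="[0-9][A-Z]{3}">
  <input type="submit" value="Send">
</form>

A supporting browser refuses to submit until the email address parses, the number falls in range and the code matches the pattern.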
Other, really exciting input types are available (see all input types). The datalist is reminiscent of a select box, but allows the user to enter their own text if they don’t want to choose one of the pre-defined options. input type=range is rendered as a slider, while input type=date pops up a date picker, all natively in the browser with no JavaScript required at all. Currently, support is most complete in an experimental implementation in Opera, with a number of the new attributes also supported in Webkit-based browsers. But don’t let that stop you! The clever thing about the specification of the new Web Forms is that all the new input types are attributes (rather than elements). input defaults to input type=text, so if a browser doesn’t understand a new HTML5 type, it gracefully degrades to a plain text input. So where does that leave validation in those browsers that don’t support Web Forms? The answer is that you don’t retire your pre-existing JavaScript validation just yet, but you leave it as a fallback after doing some feature detection. To detect whether (say) input type=email is supported, you make a new input type=email with JavaScript but don’t add it to the page. Then, you interrogate your new element to find out what its type attribute is. If it’s reported back as “email”, then the browser supports the new feature, so let it do its work and don’t bring in any JavaScript validation. If it’s reported back as “text”, it’s fallen back to the default, indicating that it’s not supported, so your code should branch to your old validation routines. Alternatively, use the small (7K) Modernizr library which will do this work for you and give you JavaScript booleans like Modernizr.inputtypes.email set to true or false. So what does this buy you? Well, first and foremost, you’re future-proofing your code for that time when all browsers support these hugely useful additions to forms. Secondly, you buy a usability and accessibility win. Although it’s tempting to style the stuffing out of your form fields (which can, incidentally, lead to madness), whatever your branding people say, it’s better to leave forms as close to the browser defaults as possible. A browser’s slider and date pickers will be the same across different sites, making them much more comprehensible to users. And, by using native controls rather than faking sliders and date pickers with JavaScript, your forms are much more likely to be accessible to users of assistive technology. HTML5 DOCTYPE You can use the new DOCTYPE !doctype html now and – hey presto – you’re writing HTML5, as it’s pretty much a superset of HTML4. There are some useful advantages to doing this. The first is that the HTML5 validator (I use http://html5.validator.nu) also validates ARIA information, whereas the HTML4 validator doesn’t, as ARIA is a new spec developed after HTML4. (Actually, it’s more accurate to say that it doesn’t validate your ARIA attributes, but it doesn’t automatically report them as an error.) Another advantage is that HTML5 allows tabindex as a global attribute (that is, on any element). Although originally designed as an accessibility bolt-on, I ordinarily advise you don’t use it; a well-structured page should provide a logical tab order through links and form fields already. However, tabindex="-1" is a legal value in HTML5, as it allows the element to be programmatically focussed by JavaScript. 
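A hedged sketch of that technique (the element and its ID are invented for illustration):

// Make the element focusable from script only; tabindex="-1" keeps it
// out of the tab order, but lets JavaScript move focus to it.
var summary = document.getElementById('error-summary'); // hypothetical element
summary.setAttribute('tabindex', '-1');
summary.focus();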
This tabindex="-1" trick is also very useful for correcting a bug in Internet Explorer when used with a keyboard; in-page links go nowhere if the destination doesn’t have a proprietary property called hasLayout set or a tabindex of -1. So, whether it is the tool of Satan or yule of Santa, HTML5 is just around the corner. Some of it you can use now, and by the end of 2010 I predict you’ll be able to use a whole lot more as new browser versions are released. 2009 Bruce Lawson brucelawson 2009-12-05T00:00:00+00:00 https://24ways.org/2009/html5-tool-of-satan-or-yule-of-santa/ code 
216 Styling Components - Typed CSS With Stylable There’s been a lot of debate recently about how best to style components for web apps so that styles don’t accidentally ‘leak’ out of the component they’re meant for, or clash with other styles on the page. Elaborate CSS conventions have sprung up, such as OOCSS, SMACSS, BEM, ITCSS, and ECSS. These work well, but they are methodologies, and require everyone in the team to know them and follow them, which can be a difficult undertaking across large or distributed teams. Others just give up on CSS and put all their styles in JavaScript. Now, I’m not bashing JS, especially so close to its 22nd birthday, but CSS-in-JS has problems of its own. Browsers have 20 years’ experience in optimising their CSS engines, so JavaScript won’t be as fast as using real CSS, and in any case, this approach requires waiting for JS to download, parse and execute before it can render the styles. There’s another problem with CSS-in-JS, too. Since Responsive Web Design hit the streets, most designers no longer make comps in Photoshop or its equivalents; instead, they write CSS. Why hire an expensive design professional and require them to learn a new way of doing their job? A recent thread on Twitter asked “What’s your biggest gripe with CSS-in-JS?”, and the replies were illuminating: “Always having to remember to camelCase properties then spending 10min pulling hair out when you do forget”, “the cryptic domain-specific languages that each of the frameworks do just ever so slightly differently”, “When I test look and feel in browser, then I copy paste from inspector, only to have to re-write it as a JSON object”, “Lack of linting, autocomplete, and css plug-ins for colors/ incrementing/ etc”. If you’re a developer, and you’re still unconvinced, I challenge you to let designers change the font in your IDE to Zapf Chancery and choose a new colour scheme, simply because they like it better. Does that sound like fun? Will that boost your productivity? Thought not. Some chums at Wix Engineering and I wanted to see if we could square this circle. Wix-hosted sites have always used CSS-in-JS (the concept isn’t new; it was in Netscape 4!) but that was causing performance problems. Could we somehow devise a method of extending CSS (like SASS and LESS do) that gives us styles that are guaranteed not to leak or clash, that is compatible with code editors’ autocompletion, and which could be pre-processed at build time to valid, cross-browser, static CSS? A few months and a few proofs of concept later (drumroll): yes – we could! We call it Stylable. Introducing Stylable Stylable is a CSS pre-processor, like SASS or LESS. It uses CSS syntax so all your development tools will work. At build time, the Stylable CSS extensions are transpiled to flat, valid, cross-browser vanilla CSS for maximum performance. There’s quite a bit to it, and this is a short article, so let’s look at the basic concepts. Components all the way down Stylable is designed for component-based systems. Imagine you have a Gallery component. Within that, there is a Navigation component (for example, containing ‘next’, ‘previous’, ‘show all thumbnails’, and ‘show all albums’ controls), and within that there are NavButton components. Each component is discrete, used elsewhere in the system in different contexts, perhaps maintained by different team members or even different organisations — you can use Stylable to add a typed interface to non-Stylable component libraries, as well as using it to build an app from scratch. 
Firstly, Stylable will automatically namespace styles so they only apply inside that component, by rewriting them at build time with a unique (but human-readable) prefix. So, for example, <div className="jingle bells" /> might be re-written as <div class="header183--jingle header183--bells"></div>. So far, so BEM-like (albeit without the headache of remembering a convention). But what else can it do? Custom pseudo-elements An important feature of Stylable is the ability to reach into a component and style it from the outside, without having to know about its internal structure. Let’s see the guts of a simple JSX button component in the file button.jsx: render () { return ( <button> <span className="icon" /> <span className="label">Submit</span> </button> ); } (Note:className is the JSX way of setting a class on an element; this example uses React, but Stylable itself is framework-agnostic.) I style it using a Stylable stylesheet (the .st.css suffix tells the preprocessor to process this file): /* button.st.css */ /* note that the root class is automatically placed on the root HTML element by Stylable React integration */ .root { background: #b0e0e6; } .icon { display: block; height: 2em; background-image: url('./assets/btnIcon.svg'); } .label { font-size: 1.2em; color: rgba(81, 12, 68, 1.0); } Note that Stylable allows all the CSS that you know and love to be included. As Drew Powers wrote in his review: with Stylable, you get CSS, and every part of CSS. This seems like a “duh” observation, but this is significant if you’ve ever battled with a CSS-in-JS framework over a lost or “hacky” implementation of a basic CSS feature. I can import my Button component into another component - this time, panel.jsx: /* panel.jsx */ import * as React from 'react'; import {properties, stylable} from 'wix-react-tools'; import {Button} from '../button'; import style from './panel.st.css'; export const Panel = stylable(style)(() => ( <div> <Button className="cancelBtn" /> </div> )); In panel.st.css: /* panel.st.css */ :import { -st-from: './button.st.css'; -st-default: Button; } /* cancelBtn is of type Button */ .cancelBtn { -st-extends: Button; background: cornflowerblue; } /* targets the label of <Button className="cancelBtn" /> */ .cancelBtn::label { color: honeydew; font-weight: bold; } Here, we’re reaching into the Button component from the Panel component. Buttons that are not inside a Panel won’t be affected. We do this by extending the CSS concept of pseudo-elements. As MDN says “A CSS pseudo-element is a keyword added to a selector that lets you style a specific part of the selected element(s)”. We don’t use a descendant selector because the label isn’t part of the Panel component, it’s part of the Button component. This syntax allows us three important features: Piercing the Shadow Boundary Because, like a Matroshka doll of code, you can have components inside components inside components, you can chain pseudo-elements. In Stylable, Gallery::NavigationPanel::Button::Icon is a legitimate selector. We were worried by this (even though all Stylable CSS is transpiled to flat, valid CSS) because it’s not allowed in CSS, albeit with the note “A future version of this specification may allow multiple pseudo-elements per selector”. So I asked the CSS Working Group and was told “we intend to only allow specific combinations”, so we feel this extension to CSS is in the spirit of the language. 
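To make that concrete, here is a hedged sketch of the chained selector in use (the rule body is invented; the selector itself is the one quoted above):

/* in a stylesheet that imports the Gallery component */
Gallery::NavigationPanel::Button::Icon {
  background-image: url('./assets/festiveIcon.svg');
}

As with everything else in Stylable, this flattens at build time to plain, valid CSS.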
While we’re on the subject of those pesky Web Standards, note that the proposed ::part and ::theme pseudo-elements are meant to fulfil the same function. However, those are coming in two years (YouTube link) and, when they do, Stylable will support them. Structure-agnostic The second totez-groovy™ feature of Stylable’s pseudo-element syntax is that you don’t have to care about the internal structure of the component whose boundary you’re piercing. Any element with a class attribute is exposed as a pseudo-element to any component that imports it. It acts as an interface on any component, whether written in-house or by a third party. Code completion When we started writing Stylable, our objective was to do for CSS what TypeScript does for JavaScript. Wikipedia says Challenges with dealing with complex JavaScript code led to demand for custom tooling to ease developing of components in the language. TypeScript developers sought a solution that would not break compatibility with the standard and its cross-platform support … [with] static typing that enables static language analysis, which facilitates tooling and IDE support. Similarly, because Stylable knows about components, their stylable parts and states, and how they inter-relate, we can develop language services like code completion and validation. That means we can see our errors at build time or even while working in our IDE. Wave goodbye to silent run-time breakage misery, with the Stylable Intelligence VS Code extension ! An action replay of Visual Studio Code offering code completion etc, filmed in super StyloVision. Pseudo-classes for state Stylable makes it easy to apply styles to custom states (as well as the usual :active, :checked, :visited etc) by extending the CSS pseudo-class syntax. We do this by declaring the possible custom states on the component: /* Gallery.st.css */ .root { -st-states: toggled, loading; } .root:toggled { color: red; } .root:loading { color: green; } .root:loading:toggled { color: blue; } The -st-states “property” is actually a directive for the transpiler, so Stylable knows about possible pseudo-elements and can offer code completion etc. It looks like a vendor prefix by design, because it’s therefore valid CSS syntax and IDEs won’t flag it as an error, but is removed at build time. Remember, Stylable resolves to flat, valid, cross-browser CSS. As with plain CSS, it can’t set a state, but can only react to states set externally. In the case of custom pseudo-classes, your JavaScript logic is responsible for maintaining state — by default, by setting a data-* attribute. And there’s more! Hopefully, I’ve shown you how Stylable extends CSS to allow you to style components and sub-components without worrying about that styles will leak, or knowing too much about internal structure. There isn’t time to tell you about mixins (CSS macros in JavaScript), variables or our theming capabilities, because I have wine to wrap and presents to mull. We made Stylable because we ♥ CSS. But there’s a practical reason, too. As James Kyle, a core team member of Yarn, Babel and TC39 (the JavaScript Standards Technical Committee), said of Styable “pretty sure all the CSS-in-JS libraries just died for me”, explaining CSS could be perfectly static if given the right tools, that’s exactly what stylable does. It gives you the tools you need in CSS so that you don’t need to do a bunch of dynamic shit in JS. Making it static is a huge performance win. 
Wix is currently battle-testing Stylable in its back-office systems, before rolling it out to power Wix-hosted sites to make them more performant. There are 110 million Wix-hosted sites, so there will be a lot of Stylable on the web in a few months. And it’s open-sourced so you, dear Reader, can try it out and use it too. There’s a Stylable boilerplate based on create-react-app to get you started (more integrations are in the pipeline). Happy Hols ‘n’ Hugz from the Stylable team: Bruce, Arnon, Tom, Ido. Read more Stylable documentation centre Stylable on Twitter A nice picture of a hedgehog 2017 Bruce Lawson brucelawson 2017-12-09T00:00:00+00:00 https://24ways.org/2017/styling-components-typed-css-with-stylable/ code
32 Cohesive UX With Yosemite, Apple users can answer iPhone calls on their MacBooks. This is weird. And yet it’s representative of a greater trend toward cohesion. Shortly after upgrading to Yosemite, a call came in on my iPhone and my MacBook “rang” in parallel. And I was all, like, “Wut?” This was a new feature in Yosemite, and honestly it was a little bizarre at first. Apple promotional image showing a phone call ringing simultaneously on multiple devices. However, I had just spoken at a conference on the very topic you’re reading about now, and therefore I appreciated the underlying concept: the cohesion of user experience, the cohesion of screens. This is just one of many examples I’ve encountered since beginning to speak about this topic months ago. But before we get ahead of ourselves, let’s look back at the past few years, specifically the role of responsive web design. RWD != cohesive experience I needn’t expound on the virtues of responsive web design (RWD). You’ve likely already encountered more than a career’s worth on the topic. This is a good thing. Count me in as one of its biggest fans. However, if we are to sing the praises of RWD, we must also acknowledge its shortcomings. One of these is that RWD ends where the browser ends. For all its goodness, RWD really has no bearing on native apps or any other experiences that take place outside the browser. This makes it challenging, therefore, to create cohesion for multi-screen users if RWD is the only response to “let’s make it work everywhere.” We need something that incorporates the spirit of RWD while unifying all touchpoints for the entire user experience—single device or several devices, in browser or sans browser, native app or otherwise. I call this cohesive UX, and I believe it’s the next era of successful user experiences. Toward a unified whole Simply put, the goal of cohesive UX is to deliver a consistent, unified user experience regardless of where the experience begins, continues, and ends. Two facets are vital to cohesive UX: Function and form Data symmetry Let’s examine each of these. Function AND form Function over form, of course. Right? Not so fast, kiddo. Consider Bruce Lawson’s dad. After receiving an Android phone for Christmas and thumbing through his favorite sites, he was puzzled why some looked different from their counterparts on the desktop. “When a site looked radically different,” Bruce observed, “he’d check the URL bar to ensure that he’d typed in the right address. In short, he found RWD to be confusing and it meant he didn’t trust the site.” A lack of cohesive form led to a jarring experience for Bruce’s dad. Now, if I appear to be suggesting websites must look the same in every browser—you already learned they needn’t—know that I recognize the importance of context, especially in regards to mobile. I made a case for this more than seven years ago. Rather, cohesive UX suggests that form deserves the same respect as function when crafting user experiences that span multiple screens or devices. And users are increasingly comfortable traversing media. For example, more than 40% of adults in the U.S. owning more than one device start an activity on one screen and finish it on another, according to a study commissioned by Facebook. I suspect that percentage will only increase in 2015, and I suspect the tech-affluent readers of 24 ways are among the 40%. There are countless examples of cohesive form and function. 
Consider Gmail, which displays email conversations visually as a stack that can be expanded and collapsed like the bellows of an accordion. This visual metaphor has been consistent in virtually any instance of Gmail—website or app—since at least 2007 when I captured this screenshot on my Nokia 6680: Screenshot captured while authoring Mobile Web Design (2007). Back then we didn’t call this an app, but rather a ‘smart client’. When the holistic experience is cohesive as it is with Gmail, users’ mental models and even muscle memory are preserved.1 Functionality and aesthetics align with the expectations users have for how things should function and what they should look like. In other words, the experience is roughly the same across screens. But don’t be ridiculous, peoples. Note that I said “roughly.” It’s important to avoid mindless replication of aesthetics and functionality for the sake of cohesion. Again, the goal is a unified whole, not a carbon copy. Affordances and concessions should be made as context and intuition require. For example, while Facebook users are accustomed to top-aligned navigation in the browser, they encounter bottom-aligned navigation in the iOS app as justified by user testing: The iOS app model has held up despite many attempts to better it: http://t.co/rSMSAqeh9m pic.twitter.com/mBp36lAEgc— Luke Wroblewski (@lukew) December 10, 2014 Despite the (rather minor) lack of consistency in navigation placement, other elements such as icons, labels, and color theme work in tandem to produce a unified, holistic whole. Data symmetry Data symmetry involves the repetition, continuity, or synchronicity of data across screens, devices, and platforms. As regards cohesive UX, data includes not just the material (such as an article you’re writing on Medium) but also the actions that can be performed on or with that material (such as Medium’s authoring tools). That is to say, “sync verbs, not just nouns” (Josh Clark). In my estimation, Amazon is an archetype of data symmetry, as is Rdio. When logged in, data is shared across virtually any device of any kind, irrespective of using a browser or native app. Add a product to your Amazon cart from your phone during the morning commute, and finish the transaction at work on your laptop. Easy peasy. Amazon’s aesthetics are crazy cohesive, to boot: Amazon web (left) and native app (right). With Rdio, not only are playlists and listening history synced across screens as you would expect, but the cohesion goes even further. Rdio’s remote control feature allows you to control music playing on one device using another device, all in real time. Rdio’s remote control feature, as viewed on my MacBook while music plays on my iMac. At my office I often work from my couch using my MacBook, but my speakers are connected to my iMac. When signed in to Rdio on both devices, my MacBook serves as proxy for controlling Rdio on my iMac, much the same as any Yosemite-enabled device can serve as proxy for an incoming iPhone call. Me, in my office. Note the iMac and speakers at far right. This is a brilliant example of cohesive design, and it’s executed entirely via the cloud. Things to consider Consider the following when crafting cohesive experiences: Inventory the elements that comprise your product experience, and cohesify them.2 Consider things such as copy, tone, typography, iconography, imagery, flow, placement, brand identification, account data, session data, user preferences, and so on. 
Then, create cohesion among these elements to the greatest extent possible, while adapting to context as needed. Store session data in the cloud rather than locally. For example, avoid using browser cookies to store shopping cart data, as cookies are specific to a single browser on a single device. Instead, store this data in the cloud so it can be accessed from other devices, as well as beyond the browser. Consider using web views when developing your native app. “You’re already using web apps in native wrappers without even noticing it,” Lukas Mathis contends. “The fact that nobody even notices, the fact that this isn’t a story, shows that, when it comes to user experience, web vs. native doesn’t matter anymore.” Web views essentially allow you to display HTML content inside a native wrapper. This can reduce the time and effort needed to make the overall experience cohesive. So whereas the navigation bar may be rendered by the app, for example, the remaining page display may be rendered via the web. There’s readily accessible documentation for using web views in C++, iOS, Android, and so forth. Nature is calling Returning to the example of Yosemite and synchronized phone calls, is it really that bizarre in light of cohesive UX? Perhaps at first. But I suspect that, over time, Yosemite’s cohesiveness — and the cohesiveness of other examples like the ones we’ve discussed here — will become not only more natural but more commonplace, too. 1 I browse Flipboard on my iPad nearly every morning as part of my breakfast routine. Swiping horizontally advances to the next page. Countless times I’ve done the same gesture in Flipboard for iPhone only to have it do nothing. This is because the gesture for advancing is vertical on phones. I’m so conditioned to the horizontal swipe that I often fail to make the switch to vertical swipe, and apparently others suffer from the same muscle memory, too. 2 Cohesify isn’t a thing. But chances are you understood what I meant. Yay neologism! 2014 Cameron Moll cameronmoll 2014-12-24T00:00:00+00:00 https://24ways.org/2014/cohesive-ux/ ux 
144 The Mobile Web, Simplified A note from the editors: although eye-opening in 2006, this article is no longer relevant to today’s mobile web. Considering a foray into mobile web development? Following are four things you need to know before making the leap. 1. 4 billion mobile subscribers expected by 2010 Fancy that. Coupled with the UN prediction of 6.8 billion humans by 2010, 4 billion mobile subscribers (source) is an astounding 59% of the planet. Just how many of those subscribers will have data plans and web-enabled phones is still in question, but inevitably this all means one thing for you and me: A ton of potential eyes to view our web content on a mobile device. 2. Context is king Your content is of little value to users if it ignores the context in which it is viewed. Consider how you access data on your mobile device. You might be holding a bottle of water or gripping a handle on the subway/tube. You’re probably seeking specific data such as directions or show times, rather than the plethora of data at your disposal via a desktop PC. The mobile web, a phrase often used to indicate “accessing the web on a mobile device”, is very much a context-, content-, and component-specific environment. Expressed in terms of your potential target audience, access to web content on a mobile device is largely influenced by surrounding circumstances and conditions, information relevant to being mobile, and the feature set of the device being used. Ask yourself, What is relevant to my users and the tasks, problems, and needs they may encounter while being mobile? Answer that question and you’ll be off to a great start. 3. WAP 2.0 is an XHTML environment In a nutshell, here are a few fundamental tenets of mobile internet technology: Wireless Application Protocol (WAP) is the protocol for enabling mobile access to internet content. Wireless Markup Language (WML) was the language of choice for WAP 1.0. Nearly all devices sold today are WAP 2.0 devices. With the introduction of WAP 2.0, XHTML Mobile Profile (XHTML-MP) became the preferred markup language. XHTML-MP will be familiar to anyone experienced with XHTML Transitional or Strict. Summary? The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it. With WML on the decline, the learning curve is much smaller today than it was several years ago. I’m generalizing things gratuitously, but the point remains: Get off yo’ lazy butt and begin to take mobile seriously. I’ll even pass you a few tips for getting started. First, the DOCTYPE for XHTML-MP is as follows: <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.0//EN" "http://www.openmobilealliance.org/tech/DTD/xhtml-mobile10.dtd"> As for MIME type, Open Mobile Alliance (OMA) specifies using the MIME type application/vnd.wap.xhtml+xml, but ultimately you need to ensure the server delivering your mobile content is configured properly for the MIME type you choose to use, as there are other options (see Setting up WAP Servers). Once you’ve made it to the body, the XHTML-MP markup is not unlike what you’re already used to. A few resources worth skimming: Developers Home XHTML-MP Tutorial – An impressively replete resource for all things XHTML-MP XHTML-MP Tags List – A complete list of XHTML-MP elements and accompanying attributes And last but certainly not least, CSS. There exists WAP CSS, which is essentially a subset of CSS2 with WAP-specific extensions. 
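Pulling those pieces together, a minimal XHTML-MP page might look something like the following hedged sketch (the content is invented for illustration):

<!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.0//EN"
  "http://www.openmobilealliance.org/tech/DTD/xhtml-mobile10.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>Show times</title>
  <style type="text/css">
    body { font-family: sans-serif; }
  </style>
</head>
<body>
  <h1>Tonight’s show times</h1>
  <p><a href="directions.html">Directions to the cinema</a></p>
</body>
</html>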
For all intents and purposes, much of the CSS you’re already comfortable using will be transferrable to mobile. As for including CSS in your pages, your options are the same as for desktop sites: external, embedded, and inline. Some experts will argue embedded or inline over external in favor of reducing the number of HTTP connections per page request, yet many popular mobilized sites and apps employ external linking without issue. Stocking stuffers: Flickr Mobile, Fandango Mobile, and Popurls Mobile. A few sites with whom you can do the View Source song and dance for further study. 4. “Cell phone” is so DynaTAC If you’re a U.S. resident, listen up: You must rid your vocabulary of the term “cell phone”. We’re one of the few economies on the planet to refer to a mobile phone accordingly. If you care to find yourself in any of the worthwhile mobile development circles, begin using terms more widely accepted: “mobile” or “mobile phone” or “handset” or “handy”. If you’re not sure which, go for “mobile”. Such as, “Yo dog, check out my new mobile.” More importantly, however, is overcoming the mentality that access to the mobile web can be done only with a phone. Instead, “device” encourages us to think phone, handheld computer, watch, Nintendo DS, car, you name it. Simple enough? 2006 Cameron Moll cameronmoll 2006-12-19T00:00:00+00:00 https://24ways.org/2006/the-mobile-web-simplified/ ux
40 Don’t Push Through the Pain In 2004, I lost my web career. In a single day, it was gone. I was in too much pain to use a keyboard, a Wacom tablet (I couldn’t even click the pen), or a trackball. Switching my mouse to use my left (non-dominant) hand only helped a bit; then that hand went, too. I tried all the easy-to-find equipment out there, except for expensive gizmos with foot pedals. I had tingling in my fingers—which, when I was away from the computer, would rhythmically move as if some other being controlled them. I worried about Parkinson’s because the movements were so dramatic. Pen on paper was painful. Finally, I discovered one day that I couldn’t even turn a doorknob. The only highlight was that I couldn’t dust, scrub, or vacuum. We were forced to hire someone to come in once a week for an hour to whip through the house. You can imagine my disappointment. My injuries had gradually slithered into my life without notice. I’d occasionally have sore elbows, or my wrist might ache for a day, or my shoulders feel tight. But nothing to keyboard home about. That’s the critical bit of news. One day, you’re pretty fine. The next day, you don’t have your job—or any job that requires the use of your hands and wrists. I had to walk away from the computer for over four months—and partially for several months more. That’s right: no income. If I hadn’t found a gifted massage therapist, the right book of stretches, the equipment I should have been using all along, and learned how to pay attention to my body—even just a little bit more—I quite possibly wouldn’t be writing this article today. I wouldn’t be writing anything, anywhere. Most of us have heard of (and even claimed to have read all of) Mihaly Csikszentmihalyi, author of Flow: The Psychology of Optimal Experience, who describes the state of flow—the place our minds go when we are fully engaged and in our element. This lovely state of highly focused activity is deeply satisfying, often creative, and quite familiar to many of us on the web who just can’t quit until the copy sings or the code is untangled or we get our highest score yet in Angry Birds. Our minds may enter that flow, but too often as our brains take flight, all else recedes. And we leave something very important behind. Our bodies. My body wasn’t made to make the same minute movements thousands of times a day, most days of the year, for decades, and neither was yours. The wear and tear sneaks up on you, especially if you’re the obsessive perfectionist that we all pretend not to be. Oh? You’re not obsessed? I wasn’t like this all the time, but I remember sitting across from my husband, eating dinner, and I didn’t hear a word he said. I’d left my brain upstairs in my office, where it was wrestling in a death match with the box model or, God help us all, IE 5.2. I was a writer, too, and I was having my first inkling that I was a content strategist. Work was exciting. I could sit up late, in the flow, fingers flying at warp speed. I could sit until those wretched birds outside mocked me with their damn, cheerful “Hurray, it’s morning!” songs. Suddenly, while, say, washing dishes, the one magical phrase that captured the essence of a voice or idea would pop up, and I would have mowed down small animals and toddlers to get to my computer and hammer out that website or article, to capture that thought before it escaped. Note my use of the word hammer. Sound at all familiar? But where was my body during my work? 
Jaw jutting forward to see the screen, feet oddly positioned—and then left in place like chunks of marble—back unsupported, fingers pounding the keys, wrists and arms permanently twisted in unnatural angles that we thought were natural. And clicking. Clicking, clicking, clicking that mouse. Thumbing tiny keyboards on phones. A lethal little gesture for tiny little tendons. Though I was fine from, say, 1997 to 2004, by the end of 2004 this behavior culminated in disaster. I had repetitive stress injuries, aka repetitive motion injuries. As the Apple site says, “A brief exposure to these conditions would not cause harm. But a prolonged exposure may, in some people, result in reduced ability to function.” I’ll say. I frantically turned to people on lists and forums. “Try a track ball.” Already did that. “Try a tablet.” Worse. One person wrote, “I still come here once in a while and can type a couple sentences, but I’ve permanently got thoracic outlet syndrome and I’ll never work again.” Oh, beauteous web, oh, long-distance friends, farewell. The Wrist Bone’s Connected to the Brain Bone That variation on the old song tells part of the story. Most people (and many of their physicians) believe that tingling fingers and aching wrists MUST be carpal tunnel syndrome. Nope. If your neck juts forward, it tenses and stays tense the entire time you work in that position. Remember how your muscles felt after holding a landline phone with your neck tilted to one side for a long client meeting? Regrettable. Tensing your shoulders because your chair’s not designed properly puts you at risk for thoracic outlet syndrome, a career-killer if ever there was one. The nerves and tendons in your neck and shoulder refer pain down your arms, and muscles swell around nerves, causing pain and dysfunction. Your elbows have a tendon that is especially vulnerable to repetitive movements (think tennis elbow). Your wrists are performing something akin to a circus act with one thousand shows a day. So, all the fine tendons and ligaments in your fingers have problems that may not start at your wrists at all. Though some people truly do have carpal tunnel syndrome, my finger and wrist problems weren’t solved by heavily massaging my fingers (though, that was helpful, too) or my wrists. They were fixed by work on my neck, upper back, shoulders, arms, and elbows. This explains why many people have surgery for carpal tunnel syndrome and just months later say, “What?! How can I possibly have it again? I had an operation!” Well, fellow buckaroo, you may never have had carpal tunnel syndrome. You may have had—or perhaps will have—one long disaster area from your neck to your fingertips. How to Crawl Back Before trying extreme measures, you may be able to function again even if you feel hopeless. I managed to heal, and so have others, but I’ll always be at risk. As Jen Simmons, of The Web Ahead podcast and other projects, told me, “It took a long time to injure myself. It took a long time to get back to where I was. My right arm between my elbow and wrist would start aching intermittently. Eventually, my arm even ached at night. I started each day with yesterday’s pain.” Simple measures, used consistently, helped her back. 1. Massage therapy I don’t remember what the rest of the world is like, but in Portland, Oregon, we have more than one massage therapy college. (Of course we do.) I saw a former teacher at the most respected school. This is not your “It was all so soothing. Why, I fell asleep!” massage. 
This is “Holy crap, he’s grinding his elbow into my armpit!“ massage therapy, with the emphasis on therapy. I owe him everything. Make sure you have someone who really knows what they’re doing. Get many referrals. Try a question, “Does my psoas muscle affect my back?” If they can’t answer it, flee. Regularly see the one you choose and after a while, depending on how injured you are, you may be able to taper off. 2. Change your equipment You may need to be hands-on with several pieces of equipment before you find the ones that don’t cause more pain. Many companies have restocking fees, charges to ship the equipment you want to return, and other retail atrocities. Always be sure to ask what the return policies are at any company before purchasing. Mice You may have more success than I did with equipment such as the Wacom tablet. Mine came with a pen, and it hurt to repetitively click it. Trackballs are another option but, for many, they are better at prevention than recovery. But let’s get to the really effective stuff. One of the biggest sources of pain is using your mouse. One major reason is that your hand and wrist are in a perpetually unnatural position and you’re also moving your arm quite a bit. Each time you move the mouse, it is placing stress on your neck, shoulders and arms, because you need to lift them slightly in order to move the mouse and you need to angle your wrist. You may also be too injured to use the trackpad all the time, and this mouse, the vertical mouse is a dandy preventative measure, too. Shaking up your patterns is a wise move. I have long fingers, not especially thin, yet the small size works best for me. (They have larger choices available.) What?! A sideways mouse? Yep. All the weight of your hand will be resting on it in the handshake position. Your forearms aren’t constantly twisting over hill and dale. You aren’t using any muscles in your wrist or hand. They are relaxing. You’ll adapt in a day, and oh, oh, what a relief it is. Keyboards I really liked doing business with the people at Kinesis-Ergo. (I’m not affiliated with them in any way.) They have the vertical mouse and a number of keyboards. The one that felt the most natural to me, and, once again, it only takes a day to adapt, is the Freestyle2 for the Mac. They have several options. I kept the keyboard halves attached to each other at first, and then spread them apart a little more. I recommend choosing one that slants and can separate. You can adjust the angle. For a little extra, they’ll make sure it’s all set up and ready to go for you. I’m guessing that some Googling will find you similar equipment, wherever you live. Warning: if you use the ergonomic keyboards, you may have fewer USB ports. The laptop will be too far away to see unless you find a satisfactory setup using a stand. This is the perfect excuse for purchasing a humongous display. You may not look cool while jetting coast to coast in your skinny jeans and what appears to be the old-time orthopedic shoe version of computing gear. But once you have rested and used many of these suggestions consistently, you may be able to use your laptop or other device in all its lovely sleekness during the trip. Other doohickies The Kinesis site and The Human Solution have a wide selection of ergonomic products: standing desks, ergonomically correct chairs, and, yes, even things with foot pedals. Explore! 3. Stop clicking, at least for a while Use keyboard shortcuts, but use them slowly. This is not the time to show off your skillz. 
You’ll be sort of like a recovering alcoholic, in that you’ll be a recovering repetitive stress survivor for the rest of your life, once you really injure yourself. Always be vigilant. There’s also a bit of software sold by The Human Solution and other places, and it was my salvation. It’s called the McNib for Macs, and the Nib for PCs. (I’ve only used the McNib.) It’s for click-free mousing. I found it tricky to use when writing markup and code, but you may become quite adept at it. A little rectangle pops up on your screen, you mouse over it and choose, let’s say, “Double-click.” Until you change that choice, if you mouse over a link or anything else, it will double-click it for you. All you do is glide your mouse around. Awkward for a day or two, but you’ll pick it up quickly. Though you can use it all day for work, even if you just use this for browsing LOLcats or Gary Vaynerchuk’s YouTube videos, it will help you by giving your fingers a sweet break. But here’s the sad news. The developer who invented this died a few years ago. (Yes, I used to speak to him on the phone.) While it is for sale, it isn’t compatible with Mac OS X Lion or anything subsequent. PowerPC strikes again. His site is still up. Demos for use with older software can be downloaded free at his old site, or at The Human Solution. Perhaps an enterprising developer can invent something that would provide this help, without interfering with patents. Rumor has it among ergonomic retailers (yes, I’m like a police dog sniffing my way to a criminal once I head down a trail) that his company was purchased by a company in China, with no update in sight. 4. Use built-in features That little microphone icon that comes up alongside the keyboard on your iPhone allows you to speak your message instead of incessantly thumbing it. I believe it works in any program that uses the keyboard. It’s not Siri. She’s for other things, like having a personal relationship with an inanimate object. Apple even has a good section on ergonomics. You think I’m intense about this subject? To improve your repetitive stress, Apple doesn’t want you to use oral contraceptives, alcohol, or tobacco, to which I say, “Have as much sex, bacon, and chocolate as possible to make up for it.” Apple’s info even has illustrations of things like a faucet dripping into what is labeled a bucket full of “TRAUMA.” Sounds like upgrading to Yosemite, but I digress. 5. Take breaks If it’s a game or other non-essential activity, take a break for a month. Fine, now that I’ve called games non-essential, I suppose you’ll all unfollow me on Twitter. 6. Whether you are sore or not, do stretches throughout the day This is a big one. Really big. The best book on the subject of repetitive stress injuries is Conquering Carpal Tunnel Syndrome and Other Repetitive Strain Injuries: A Self-Care Program by Sharon J. Butler. Don’t worry, most of it is illustrations. Pretend it’s a graphic novel. I’m notorious for never reading instructions, and who on earth reads the introduction of a book, unless they wrote it? I wrote a book a long time ago, and I bet my house, husband, and life savings that my own parents never read the intro. Well, I did read the intro to this book, and you should, too. Stretching correctly, in a way that doesn’t further hurt you, that keeps you flexible if you aren’t injured, that actually heals you, calls for precision. Read and you’ll see. The key is to stretch just until you start to feel the stretch, even if that’s merely a tiny movement. 
Don’t force anything past that point. Kindly nurse yourself back to health, or nurture your still-healthy body by stretching. Over the following days, weeks, months, you’ll be moving well past that initial stretch point. The book is brimming with examples. You only have to pick a few stretches, if this is too much to handle. Do it every single day. I can tell you some of the best ones for me, but it depends on the person. You’ll also discover in Butler’s book that areas that you think are the problem are sometimes actually adjacent to the muscle or tendon that is the source of the problem. Add a few stretches or two for that area, too. But please follow the instructions in the introduction. If you overdo it, or perform some other crazy-ass hijinks, as I would be tempted to do, I am not responsible for your outcome. I give you fair warning that I am not a healthcare provider. I’m just telling you as a friend, an untrained one, at that, who has been through this experience. 7. Follow good habits Develop habits like drinking lots of water (which helps with lactic acid buildup in muscles), looking away from the computer for twenty seconds every twenty to thirty minutes, eating right, and probably doing everything else your mother told you to do. Maybe this is a good time to bring up flossing your teeth, and going outside to play instead of watching TV. As your mom would say, “It’s a beautiful day outside, what are you kids doing in here?” 8. Speak instead of writing, if you can Amber Simmons, who is very smart and funny, once tweeted in front of the whole world that, “@carywood is a Skype whore.” I was always asking people on Twitter if we could Skype instead of using iChat or exchanging emails. (I prefer the audio version so I don’t have to, you know, do something drastic like comb my hair.) Keyboarding is tough on hands, whether you notice it or not at the time, and when doing rapid-fire back-and-forthing with people, you tend to speed up your typing and not take any breaks. This is a hand-killer. Voice chats have made such a difference for me that I am still a rabid Skype whore. Wait, did I say that out loud? Speak your text or emails, using Dragon Dictate or other software. In about 2005, accessibility and user experience design expert, Derek Featherstone, in Canada, and I, at home, chatted over the internet, each of us using a different voice-to-text program. The programs made so many mistakes communicating with each other that we began that sort of endless, tearful laughing that makes you think someone may need to call an ambulance. This type of software has improved quite a bit over the years, thank goodness. Lack of accessibility of any kind isn’t funny to Derek or me or to anyone who can’t use the web without pain. 9. Watch your position For example, if you lift up your arms to use the computer, or stare down at your laptop, you’ll need to rearrange your equipment. The internet has a lot of information about ideal ergonomic work areas. Please use a keyboard drawer. Be sure to measure the height carefully so that even a tented keyboard, like the one I recommend, will fit. I also recommend getting the version of the Freestyle with palm supports. Just these two measures did much to help both Jen Simmons and me. 10. If you need to take anti-inflammatories, stop working If you are all drugged up on ibuprofen, and pounding and clicking like mad, your body will not know when you are tired or injuring yourself. I don’t recommend taking these while using your computing devices. 
Perhaps just take it at night, though I’m not a fan of that category of medications. Check with your healthcare provider. At least ibuprofen is an anti-inflammatory, which may help you. In contrast, acetaminophen (paracetamol) only makes your body think it’s not in pain. Ice is great, as is switching back and forth between ice and heat. But again, if you need ice and ibuprofen you really need to take a major break. 11. Don’t forget the rest of your body I’ve zeroed in on my personal area of knowledge and experience, but you may be setting yourself up for problems in other areas of your body. There’s what is known to bad writers as “a veritable cornucopia” of information on the web about how to help the rest of your body. A wee bit of research on the web and you’ll discover simple exercises and stretches for the rest of your potential catastrophic areas: your upper back, your lower back, your legs, ankles, and eyes. Do gentle stretches, three or four times a day, rather than powering your way through. Ease into new equipment such as standing desks. Stretch those newly challenged areas until your body adapts. Pay attention to your body, even though I too often forget mine. 12. Remember the children Kids are using equipment to play highly addictive games or to explore amazing software, and if these call for repetitive motions, children are being set up for future injuries. They’ll grab hold of something, as parents out there know, and play it 3,742 times. That afternoon. Perhaps by the time they are adults, everything will just be holograms and mind-reading, but adult fingers and hands are used for most things in life, not just computing devices and phones with keyboards sized for baby chipmunks. I’ll be watching you Quickly now, while I (possibly) have your attention. Don’t move a muscle. Is your neck tense? Are you unconsciously lifting your shoulders up? How long since you stopped staring at the screen? How bright is your screen? Are you slumping (c’mon now, ‘fess up) and inviting sciatica problems? Do you have to turn your hands at an angle relative to your wrist in order to type? Uh-oh. That’s a bad one. Your hands, wrists, and forearms should be one straight line while keyboarding. Future you is begging you to change your ways. Don’t let your #ThrowbackThursday in 2020 say, “Here’s a photo from when I used to be able to do so many wonderful things that I can’t do now.” And, whatever you do, don’t try for even a nanosecond to push through the pain, or the next thing you know, you’ll be an unpaid extra in The Expendables 7. 2014 Carolyn Wood carolynwood 2014-12-06T00:00:00+00:00 https://24ways.org/2014/dont-push-through-the-pain/ business
228 The Great Unveiling The moment of unveiling our designs should be among our proudest, but it never seems to work out that way. Instead of a chance to show how we can bring our clients’ visions to life, critique can be a tense, worrying ordeal. And yes, the stakes are high: a superb design is only superb if it goes live. Mismanage the feedback process and your research, creativity and hard work can be wasted, and your client may wonder whether you’ve been worth the investment. The great unveiling is a pivotal part of the design process, but it needn’t be a negative one. Just as usability testing teaches us whether our designs meet user needs, presenting our work to clients tells us whether we’ve met important business goals. So how can we turn the tide to make presenting designs a constructive experience, and to give good designs a chance to shine through? Timing is everything First, consider when you should seek others’ opinions. Your personal style will influence whether you show early sketches or wait to demonstrate something more complete. Some designers thrive at low fidelity, sketching out ideas that, despite their rudimentary nature, easily spark debate. Other designers take time to create more fully-realised versions. Some even argue that the great unveiling should be eliminated altogether by working directly alongside the client throughout, collaborating on the design to reach its full potential. Whatever your individual preference, you’ll rarely have the chance to do it entirely your own way. Contracts, clients, and deadlines will affect how early and often you share your work. However, try to avoid the trap of presenting too late and at too high fidelity. My experience has taught me that skilled designers tend to present their work earlier and allow longer for iteration than novices do. More aware of the potential flaws in their solutions, these designers cling less tightly to their initial efforts. Working roughly and seeking early feedback gives you the flexibility to respond more fully to nuances you may have missed until now. Planning design reviews Present design ideas face-to-face, or at least via video conference. Asynchronous methods like e-mail and Basecamp are slow, easily ignored, and deny you the opportunity to guide your colleagues through your work. In person, you profit from both the well-known benefits of non-verbal communication, and the chance to immediately respond to questions and elaborate on rationale. Be sure to watch the numbers at your design review sessions, however. Any more than a handful of attendees and the meeting could quickly spiral into fruitless debate. Ask your project sponsor to appoint a representative to speak on behalf of each business function, rather than inviting too many cooks. Where possible, show your work in its native format. Photocopy hand-drawn sketches to reinforce their disposability (the defining quality of a sketch) and encourage others to scribble their own thoughts on top. Show digital deliverables – wireframes, design concepts, rich interactions – on screen. The experience of a design is very different on screen than on paper. A monitor has appropriate dimensions and viewport size, presenting an accurate picture of the design’s visual hierarchy, and putting interactive elements in the right context. On paper, a link is merely underlined text. On screen, it is another step along the user’s journey. Don’t waste time presenting multiple concepts. 
Not only is it costly to work up multiple concepts to the level required for fair appraisal, but the practice demonstrates a sorry abdication of responsibility. Designers should be custodians of design. Asking for feedback on multiple designs turns the critique process into a beauty pageant, relinquishing a designer’s authority. Instead of rational choices that meet genuine user and business needs, you may be stuck with a Frankensteinian monstrosity, assembled from incompatible parts: “This header plus the whizzy bit from Version C”. This isn’t to say that you shouldn’t explore lots of ideas yourself. Divergent thinking early in the design process is the only way to break free of the clichéd patterns and fads that so often litter mediocre sites. But you must act as a design curator, choosing the best of your work and explaining its rationale clearly and succinctly. Attitude, then, is central to successful critique. It can be difficult to tread the fine line between the harmful extremes of doormat passivity and prima donna arrogance. Remember that you are the professional, but be mindful that even experts make mistakes, particularly when – as with all design projects – they don’t possess all the relevant information in advance. Present your case with open-minded confidence, while accepting that positive critique will make your design (and ultimately your skills) stronger. The courage of your convictions Ultimately, your success in the feedback process, and indeed in the entire design process, hinges upon the rationale you provide for your work. Ideally, you should be able to refer to your research – personas, usability test findings, analytics – to support your decisions. To keep this evidence in mind, print it out to share at the design review, or include it in your presentation. Explain the rationale behind the most important decisions before showing the design, so that you can be sure of the full attention of your audience. Once you’ve covered these points, display your design and walk through the specific features of the page. A little honesty goes a long way here: state your case as strongly as your rationale demands. Sure of your reasoning? Be strong. Speculating an approach based on a hunch? Say so, and encourage your colleagues to explore the idea with you and see where it leads. Of course, none of these approaches should be sacrosanct. A proficient designer must be able to bend his or her way of working to suit the situation at hand. So sometimes you’ll want to ignore these rules of thumb and explore your own hunches as required. More power to you. As long as you think as clearly about the feedback process as you have about the design itself, you’ll be able to enjoy the great unveiling as a moment to be savoured, not feared. 2010 Cennydd Bowles cennyddbowles 2010-12-12T00:00:00+00:00 https://24ways.org/2010/the-great-unveiling/ business
8 Coding Towards Accessibility “Can we make it AAA-compliant?” – does this question strike fear into your heart? Maybe for no other reason than because you will soon have to wade through the impenetrable WCAG documentation once again, to find out exactly what AAA-compliant means? I’m not here to talk about that. The Web Content Accessibility Guidelines are a comprehensive and peer-reviewed resource which we’re lucky to have at our fingertips. But they are also a pig to read, and they may have contributed to the sense of mystery and dread with which some developers associate the word accessibility. This Christmas, I want to share with you some thoughts and some practical tips for building accessible interfaces which you can start using today, without having to do a ton of reading or changing your tools and workflow. But first, let’s clear up a couple of misconceptions. Dreary, flat experiences I recently built a front-end framework for the Post Office. This was a great gig for a developer, but when I found out about my client’s stringent accessibility requirements I was concerned that I’d have to scale back what was quite a complex set of visual designs. Sites like Jakob Nielsen’s old workhorse useit.com and even the pioneering GOV.UK may have to shoulder some of the blame for this. They put a premium on usability and accessibility over visual flourish. (Although, in fairness to Mr Nielsen, his new site nngroup.com is really quite a snazzy affair, comparatively.) Of course, there are other reasons for these sites’ aesthetics — and those reasons have nothing to do with the limitations of the form. You can make an accessible site look as glossy or as plain as you want it to look. It’s always our own ingenuity and attention to detail that are going to be the limiting factors. Synecdoche We must always guard against the tendency to assume that catering to screen readers means we have the whole accessibility ballgame covered. There’s so much more to accessibility than assistive technology, as you know. And within the field of assistive technology there are plenty of other devices for us to consider. Planning to accommodate all these users and devices can be daunting. When I first started working in this field I thought that the breadth of technology was prohibitive. I didn’t even know what a screen reader looked like. (I assumed they were big and heavy, perhaps like an old typewriter, and certainly they would be expensive and difficult to fathom.) This is nonsense, of course. Screen reader emulators are readily available as browser extensions and can be activated in seconds. Chromevox and Fangs are both excellent and you should download one or the other right now. But the really good news is that you can emulate many other types of assistive technology without downloading a byte. And this is where we move from misconceptions into some (hopefully) useful advice. The mouse trap The simplest and most effective way to improve your abilities as a developer of accessible interfaces is to unplug your mouse. Keyboard operation has its own WCAG chapter, because most users of assistive technology are navigating the web using only their keyboards. You can go some way towards putting yourself into their shoes so easily — just by ditching a peripheral. Learning this was a lightbulb moment for me. When I build interfaces I am constantly flicking between code and the browser, testing or viewing the changes I have made. Now, instead of checking a new element once, I check it twice: once with my mouse and then again without.
Don’t just :hover The reality is that when you first start doing this you can find your site becomes unusable straightaway. It’s easy to lose track of which element is in focus as you hit the tab key repeatedly. One of the easiest changes you can make to your coding practice is to add :focus and :active pseudo-classes to every hover state that you write. I’m still amazed at how many sites fail to provide a decent focus state for links (and despite previous 24 ways authors in 2007 and 2009 writing on this same issue!). You may find that in some cases it makes sense to have something other than, or in addition to, the hover state on focus, but start with the hover state that your designer has taken the time to provide you with. It’s a tiny change and there is no downside. So instead of this: .my-cool-link:hover { background-color: MistyRose; } …try writing this: .my-cool-link:hover, .my-cool-link:focus, .my-cool-link:active { background-color: MistyRose; } I’ve toyed with the idea of making a Sass mixin to take care of this for me, but I haven’t yet. I worry that people reading my code won’t see that I’m explicitly defining my focus and active states so I take the hit and write my hover rules out longhand. JavaScript can play, too This was another revelation for me. Keyboard-only navigation doesn’t necessitate a JavaScript-free experience, and up-to-date screen readers can execute JavaScript. So we’re able to create complex JavaScript-driven interfaces which all users can interact with. Some of the hard work has already been done for us. First, there are already conventions around keyboard-driven interfaces. Think about the last time you viewed a photo album on Facebook. You can use the arrow keys to switch between photos, and the escape key closes whichever lightbox-y UI thing Facebook is showing its photos in this week. Arrow keys (up/down as well as left/right) for progression through content; Escape to back out of something; Enter or space bar to indicate a positive intention — these are established keyboard conventions which we can apply to our interfaces to improve their accessibility. Of course, by doing so we are improving our interfaces in general, giving all users the option to switch between keyboard and mouse actions as and when it suits them. Second, this guy wants to help you out. Hans Hillen is a developer who has done a great deal of work around accessibility and JavaScript-powered interfaces. Along with The Paciello Group he has created a version of the jQuery UI library which has been fully optimised for keyboard navigation and screen reader use. It’s a fantastic reference which I revisit all the time. I’m not a huge fan of the jQuery UI library. It’s a pain to style and the code is a bit bloated. So I’ve not used this demo as a code resource to copy wholesale. I use it by playing with the various components and seeing how they react to keyboard controls. Each component is also fully marked up with the relevant ARIA roles to improve screen reader announcement where possible (more on this below). Coding for accessibility promotes good habits This is another observation around accessibility and JavaScript. I noticed an improvement in the structure and abstraction of my code when I started adding keyboard controls to my interface elements. Your code has to become more modular and event-driven, because any number of events could trigger the same interaction.
A mouse-click, the Enter key and the space bar could all conceivably trigger the same open function on a collapsed accordion element. (And you want to keep things DRY, don’t you?) If you aren’t already in the habit of separating out your interface functionality into discrete functions, you will be soon.

var doSomethingCool = function(){
  // Do something cool here.
}

// Bind function to a button click - pretty vanilla
$('.myCoolButton').on('click', function(){
  doSomethingCool();
  return false;
});

// Bind the same function to a range of keypresses
$(document).keyup(function(e){
  switch(e.keyCode) {
    case 13: // enter
    case 32: // spacebar
      doSomethingCool();
      break;
    case 27: // escape
      doSomethingElse();
      break;
  }
});

To be honest, if you’re doing complex UI stuff with JavaScript these days, or if you’ve been building any responsive interfaces which rely on JavaScript, then you are most likely working with an application framework such as Backbone, Angular or Ember, so an abstracted and event-driven application structure will be familiar to you. It should be super easy for you to start helping out your keyboard-only users if you aren’t already — just add a few more event bindings into your UI layer! Manipulating the tab order So, you’ve adjusted your mindset and now you test every change to your codebase using a keyboard as well as a mouse. You’ve applied all your hover states to :focus and :active so you can see where you’re tabbing on the page, and your interactive components react seamlessly to a mixture of mouse and keyboard commands. Feels good, right? There’s another level of optimisation to consider: manipulating the tab order. Certain DOM elements are naturally part of the tab order, and others are excluded. Links and input elements are the main elements included in the tab order, and static elements like paragraphs and headings are excluded. What if you want to make a static element ‘tabbable’? A good example would be in an expandable accordion component. Each section of the accordion should be separated by a heading, and there’s no reason to make that heading into a link simply because it’s interactive. <div class="accordion-widget"> <h3>Tyrannosaurus</h3> <p>Tyrannosaurus; meaning "tyrant lizard"...</p> <h3>Utahraptor</h3> <p>Utahraptor is a genus of theropod dinosaurs...</p> <h3>Dromiceiomimus</h3> <p>Ornithomimus is a genus of ornithomimid dinosaurs...</p> </div> Adding the heading elements to the tab order is trivial. We just set their tabindex attribute to zero. You could do this on the server or the client. I prefer to do it with JavaScript as part of the accordion setup and initialisation process. $('.accordion-widget h3').attr('tabindex', '0'); You can apply this trick in reverse and take elements out of the tab order by setting their tabindex attribute to −1, or change the tab order completely by using other integers. This should be done with great care, if at all. You have to be sure that the markup you remove from the tab order comes out because it genuinely improves the keyboard interaction experience. This is hard to validate without user testing. The danger is that developers will try to sweep complicated parts of the UI under the carpet by taking them out of the tab order. This would be considered a dark pattern — at least on my team! A farewell ARIA This is where things can get complex, and I’m no expert on the ARIA specification: I feel like I’ve only dipped my toe into this aspect of coding for accessibility.
But, as with WCAG, I’d like to demystify things a little bit to encourage you to look into this area further yourself. ARIA roles are of most benefit to screen reader users, because they modify and augment screen reader announcements. Let’s take our dinosaur accordion from the previous section. The markup is semantic, so a screen reader that can’t handle JavaScript will announce all the content within the accordion, no problem. But modern screen readers can deal with JavaScript, and this means that all the lovely dino information beneath each heading has probably been hidden on document.ready, when the accordion initialised. It might have been hidden using display:none, which prevents a screen reader from announcing content. If that’s as far as you have gone, then you’ve committed an accessibility sin by hiding content from screen readers. Your user will hear a set of headings being announced, with no content in between. It would sound something like this if you were using Chromevox: > Tyrannosaurus. Heading Three. > Utahraptor. Heading Three. > Dromiceiomimus. Heading Three. We can add some ARIA magic to the markup to improve this, using the tablist role. Start by adding a role of tablist to the widget, and roles of tab and tabpanel to the headings and paragraphs respectively. Set boolean values for aria-selected, aria-hidden and aria-expanded. The markup could end up looking something like this. <div class="accordion-widget" role="tablist"> <!-- T-rex --> <h3 role="tab" tabindex="0" id="tab-2" aria-controls="panel-2" aria-selected="false">Utahraptor</h3> <p role="tabpanel" id="panel-2" aria-labelledby="tab-2" aria-expanded="false" aria-hidden="true">Utahraptor is a genus of theropod dinosaurs...</p> <!-- Dromiceiomimus --> </div> Now, if a screen reader user encounters this markup they will hear the following: > Tyrannosaurus. Tab not selected; one of three. > Utahraptor. Tab not selected; two of three. > Dromiceiomimus. Tab not selected; three of three. You could add arrow key events to help the user browse up and down the tab list items until they find one they like. Your accordion open() function should update the ARIA boolean values as well as adding whatever classes and animations you have built in as standard. Your users know that unselected tabs are meant to be interacted with, so if a user triggers the open function (say, by hitting Enter or the space bar on the second item) they will hear this: > Utahraptor. Selected; two of three. The paragraph element for the expanded item will not be hidden by your CSS, which means it will be announced as normal by the screen reader. This kind of thing makes so much more sense when you have a working example to play with. Again, I refer you to the fantastic resource that Hans Hillen has put together: this is his take on an accessible accordion, on which much of my example is based. Conclusion Getting complex interfaces right for all of your users can be difficult — there’s no point pretending otherwise. And there’s no substitute for user testing with real users who navigate the web using assistive technology every day. This kind of testing can be time-consuming to recruit for and to conduct. On top of this, we now have accessibility on mobile devices to contend with. That’s a huge area in itself, and it’s one which I have not yet had a chance to research properly. So, there’s lots to learn, and there’s lots to do to get it right. But don’t be disheartened. 
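One practical footnote before we finish: the open() function mentioned earlier has to keep those ARIA attributes in sync with the visual state. Here is a minimal sketch of how that might look. It is not Hans Hillen's implementation; it assumes jQuery and the tablist markup from the example above, and the is-open class name is purely hypothetical.

// A minimal sketch of an accordion open() function that keeps
// ARIA state in sync (assumes jQuery; the is-open class is made up).
function open($tab) {
  // Find the panel this tab controls, via its aria-controls attribute.
  var $panel = $('#' + $tab.attr('aria-controls'));

  // Update the ARIA booleans so screen readers announce the new state.
  $tab.attr('aria-selected', 'true');
  $panel.attr({
    'aria-expanded': 'true',
    'aria-hidden': 'false'
  });

  // Then add whatever classes and animations you have built in as standard.
  $panel.addClass('is-open').slideDown();
}

// Trigger the same function from a click, the Enter key or the space bar.
$('.accordion-widget').on('click keyup', '[role="tab"]', function(e) {
  if (e.type === 'click' || e.keyCode === 13 || e.keyCode === 32) {
    open($(this));
  }
});

A complete version would also close the previously selected tab and reset its attributes, but the principle stands: every visual state change should have an ARIA counterpart.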
If you have read this far then I’ll leave you with one final piece of advice: don’t wait. Don’t wait until you’re building a site which mandates AAA-compliance to try this stuff out. Don’t wait for a client with the will or the budget to conduct the full spectrum of user testing to come along. Unplug your mouse, and start playing with your interfaces in a new way. You’ll be surprised at the things that you learn and the issues you uncover. And the next time a true accessibility project comes along, you will be way ahead of the game. 2013 Charlie Perrins charlieperrins 2013-12-03T00:00:00+00:00 https://24ways.org/2013/coding-towards-accessibility/ code
45 Is Agile Harder for Agencies? I once sat in a pitch meeting and watched a new business exec tell a potential client that his agency followed an agile workflow process at all times. The potential client nodded wisely, and they both agreed that agile was indeed the way to go. The meeting progressed and they signed off on a contract for a massive project, to be delivered in a standard waterfall fashion, with all manner of phases and key deliverables. Of course both of them left the meeting perfectly happy, because neither of them knew nor cared what an agile workflow process might be. That was about five years ago. As 2015 heaves into view I think it’s fair to say that attitudes have changed. Perhaps the same number of people claim to do Agile™ now as in 2010, but I think more of them are telling the truth. As a developer in an agency that works primarily with larger organisations, this year I have started to see a shift from agencies pushing agile methodologies with their clients, to clients requesting and even demanding agile practices from their agencies. Only a couple of years ago this would have been unusual behaviour. So what’s the problem? We should be happy then, no? Those of us in agencies will get to spend more time delivering great products, and less time arguing over out-of-date functional specs or battling through an adversarial change management procedure because somebody had a good idea during development rather than planning. We get to be a little bit more like our brothers and sisters in vaunted teams like the Government Digital Service, which is using agile approaches to great effect on projects that have a real benefit to their users. Almost. Unfortunately, it seems to be the case that adhering to an agile framework such as Scrum is more difficult within an agency/client structure than it is for an in-house development team. This is no surprise. The Agile Manifesto was written in 2001 by a group of software developers for their own use. Many of the underlying principles of a framework like Scrum assume the existence of an in-house team, working on a highly technical project, and working for the business that employs them. The agency/client model must to some extent be retrofitted into agile frameworks. It can be done though, and there are plenty of agencies out there doing it well. This article isn’t meant to be another introduction to agile techniques – there are too many of those online already. This article is for people just dipping their toes into this way of working. I’ve laid out a few of the key reasons why adopting a more fully agile approach seems difficult, at least initially, for those of us working in agencies. 1. Agile asks more of your clients When a team adopts Scrum everyone has to get used to a number of unfamiliar roles and rituals. Few team members have a steeper learning curve than the person designated as the product owner. The product owner carries a lot of weight on their shoulders. They have to uphold the overall vision for the project. They are also meant to be the primary author of the project’s user stories (short atomic descriptions of project features which are testable and relate to a real business need). They should own this list of stories (called a backlog) and should be able to prioritise the order in which the stories are developed, to ensure that the project is delivering real value to the business early and often.
When a burst of work is completed (bursts of work in Scrum are called sprints), the product owner leads a review or show-and-tell session with the wider project stakeholders. The product owner needs to understand the work that has been completed, and must champion it to the business. Finally, and most importantly, the product owner is responsible for managing the feedback and requests from stakeholders in such a way that they don’t derail the project team’s agreed workload for any given sprint, without upsetting or offending any of the stakeholders – some of whom may outrank the product owner. If you follow that spec, this is a job for a superhuman in any organisational context. And within the agency/client structure this superhuman needs to be client-side for the process to be at its most effective. So your client, who in the past might have briefed a project to an agency team and then had the work presented back to them every few weeks, is now asked to be involved with the team on a daily basis; to fight on behalf of the team when new or difficult requests come in from senior figures within their organisation; and to present the agency’s work to their own colleagues after each sprint. It’s a big change if all that gets dropped into someone’s lap without warning. There are several ways agencies can mitigate this issue. The ScrumAlliance suggests some alternative ways to structure the product owner role. The approach I have taken in the past is simply to start slow, and gradually move more of the product owner role over to the client side as and when they feel comfortable with it. If you’re working together long-term on a project, and you both see tangible improvements in the quality of the work after adopting an agile process, then your client is more likely to be open to further changes as the partnership progresses. 2. My client wants fixed costs, fixed deadlines and a fixed scope I know. Mine too. Of course they do – it is the way that agencies and clients have agreed to work in digital and other creative service industries for a very long time. On both sides of the fence we’re used to thinking about projects in this way. Of the three, fixing scope is the one that agile purists would rail hardest against. The more time we spend working on digital projects, the less sense it makes. James Archer, CEO of UI/UX design agency Forty, puts it like this: For me, the Agile approach is really about acknowledging that disturbing truth that every project manager knows, but has trouble admitting. The truth that the project plan is wrong. Scope creep. Change orders. Shifting priorities. New directions. We act shocked and appalled when those things happen during our carefully planned project, even though they happen on every project ever. Successful relationships require trust and honesty, and we shouldn’t be afraid of discussing this aspect of project management. If you do move away from a fixed scope of work, then the other two items (costs and timings) can be fixed – more or less. If you can get your clients to buy into this from a standing start then you are doing well. In fact you probably deserve a promotion. For most of us this is a continual discussion. Anyway, as soon as you’ve made headway on the argument that it makes little or no sense to try and fix the scope of a digital project, you usually run into a related concern, which we’ll look at next. 3. Fear of uncontrolled costs We all know that a dog is for life, not just for Christmas.
At this time of year perhaps we should reiterate to everyone that digital products and services also need support and love once we have taken the decision to bring them into the world. More organisations are realising that their investment in digital platforms should be viewed as an operational expenditure rather than a capital expenditure. But from time to time we will find ourselves working on projects for people who have a finite amount of money to invest in a product at a given point in time. When agencies start talking about these projects as rolling investments, those responsible can understandably worry about their costs running out of control. There’s another factor at play here. Agile, on the whole, prefers to derive a cost for services from the hours a team spends working on a project. In other industries this is referred to as charging for time and materials, and there seems to be an ingrained distrust in this approach among people in general. See, for example, the Citizens Advice Bureau’s “Top tips for employing a builder”: “Bear in mind that if you pay a daily rate, this makes it easier for a builder to string the work out and get more money so agree what you will do if the job takes longer than expected.” It’s hard not to feel stung if you are in the builder’s shoes here, as we are when we’re talking about our role as an agency. But if you’ve ever haggled with a builder over time and materials, and also moaned about your clients misunderstanding agile methods, take a moment to reflect on the similarities from your client’s point of view. Again, there are some things we can do to mitigate this issue. Some agencies put in place a service level agreement around their team’s velocity (an agile term describing how much work a team delivers in any given sprint) and this can help. As the industry moves further towards a long-term approach to investment in digital I hope this fear will subside. But that shift in approach leads to the final concern I want to address. 4. Agency structures need shaking up If you work for a company that has spent many years developing a business model around the waterfall process, you may have to break through many layers of entrenched thinking in order to establish new practices and effect organisational change. There are consultancies that exist specifically to help agencies through their own agile transformation. One of these companies, AgencyAgile, provides a helpful list of common pitfalls. They emphasise the need to look at your whole agency’s structure, rather than simply encouraging project teams to adopt new workflows. Even awesomely run Agile projects can have a limited impact on the overall organization. If you’re serious about changing the way your company approaches projects then try talking to people who sit outside the usual project delivery team. Speak to the finance department if you have one, and try to convince your senior management team if they’re not already on board. And definitely speak to your new business people, who go out there and win the projects you get to work on. It’s these people who need to understand the potential business benefits of working in a new way, and also which of their existing habits and behaviours they might need to change to accommodate a new approach.
Otherwise you’ll find yourself with a team of designers, developers and project managers who are ready and waiting to deliver work in an iterative and collaborative way, but by the time they get hold of the project a cost has already been agreed, a deadline has been imposed, and a functional requirements document has been painstakingly put together. Nobody wins in this situation. Conclusion So where should we go from here? I certainly don’t have hard and fast answers – I’m not sure that they exist in a one-size-fits-all approach for agencies. There are plenty of smart people thinking about this problem. It’s a hot topic right now. Earlier in the year a London-based meetup was established called Agile for Agencies. If you’re in the capital and want to discuss these issues with your peers it’s a great opportunity to do so. I’ve mentioned James Archer and Forty already. Both James and Paul Boag have written in the last twelve months on this subject. They both come out on the side of the argument that suggests you adopt agile principles, but don’t have to worry about the rituals if they don’t fit in with your practices. Personally, I think the rituals and the discipline mandated by an agile framework like Scrum can provide a great deal of value to your team, even if it is hard to implement within an agency culture that has traditionally structured its work and its services in another way. In whatever way you figure out the details, when your teams collaborate with your clients rather than work for them at arm’s length, and when everyone prioritises frequent delivery, reflection and iteration over exhaustive scoping and planning, I believe you’ll see a tangible difference in the quality of the work that you create. 2014 Charlie Perrins charlieperrins 2014-12-12T00:00:00+00:00 https://24ways.org/2014/is-agile-harder-for-agencies/ process
204 Cascading Web Design with Feature Queries Feature queries, also known as the @supports rule, were introduced as an extension to CSS2 as part of the CSS Conditional Rules Module Level 3, which was first published as a working draft in 2011. It is a conditional group rule that tests whether the browser supports CSS property:value pairs, and arbitrary conjunctions (and), disjunctions (or), and negations (not) of them. The motivation behind this feature was to allow authors to write styles using new features when they were supported but degrade gracefully in browsers where they are not. Even though the nature of CSS already allows for graceful degradation, for example, by ignoring unsupported properties or values without disrupting other styles in the stylesheet, sometimes we need a bit more than that. CSS is ultimately a holistic technology, in that, even though you can use properties in isolation, the full power of CSS shines through when used in combination. This is especially evident when it comes to building web layouts. Having native feature detection in CSS makes it much more convenient to build with cutting-edge CSS for the latest browsers while supporting older browsers at the same time. Browser support Opera first implemented feature queries in November 2012; Chrome and Firefox have supported them since May 2013. There have been several articles written about feature queries over the years; however, awareness of their broad support still seems limited. Much of the earlier coverage on feature queries was not written in English, and perhaps that was a limiting factor. @supports ― CSSのFeature Queries by Masataka Yakura, August 8 2012 Native CSS Feature Detection via the @supports Rule by Chris Mills, December 21 2012 CSS @supports by David Walsh, April 3 2013 Responsive typography with CSS Feature Queries by Aral Balkan, April 9 2013 How to use the @supports rule in your CSS by Lea Verou, January 31 2014 CSS Feature Queries by Amit Tal, June 2 2014 Coming Soon: CSS Feature Queries by Adobe Web Platform Team, August 21 2014 CSS feature queries mittels @supports by Daniel Erlinger, November 27 2014 As of December 2017, all current major browsers and their previous two versions support feature queries. Feature queries are also supported on Opera Mini, UC Browser and Samsung Internet. The only browsers that do not support feature queries are Internet Explorer and BlackBerry Mobile, but that may be less of an issue than you might think. Can I Use css-featurequeries? Data on support for the css-featurequeries feature across the major browsers from caniuse.com. Granted, there is still a significant number of organisations that require support of Internet Explorer. Microsoft still continues to support IE11 for the life-cycle of Windows 7, 8 and 10. They did, however, stop supporting older versions on January 12, 2016. It is inevitable that there will be organisations that, for some reason or another, make it mandatory to support IE, but as time goes on, this number will continue to shrink. Jen Simmons wrote an extensive article called Using Feature Queries in CSS which discussed a matrix of potential situations when it comes to the usage of feature queries. The following image is a summary of the aforementioned matrix. The most tricky situation we have to deal with is the box in the top-left corner: “browsers that don’t support feature queries, yet do support the feature in question”.
For cases like those, it really depends on the specific CSS feature you want to use and a subsequent evaluation of the pros and cons of not including that feature in spite of the fact the browser (most likely Internet Explorer) supports it. The basics of feature queries As with any conditional, feature queries operate on boolean logic: if the query resolves to true, the CSS within the block is applied; otherwise, the entire block is ignored. The syntax of a simple feature query is as follows: .selector { /* Styles that are supported in old browsers */ } @supports (property:value) { .selector { /* Styles for browsers that support the specified property */ } } Note that the parentheses around the property:value pair are mandatory and the rule is invalid without them. Styles that apply to older browsers, i.e. fallback styles, should come first, followed by the newer properties, which are contained within the @supports conditional. Because of the cascade, fallback styles will be overridden by the newer properties in the modern browsers that support them. main { background-color: red; } @supports (display:grid) { main { background-color: green; } } In this example, browsers that support CSS grid will have a main element with a green background colour because the conditional resolves to true, while browsers that do not support grid will have a main element with a red background colour. The implication of such behaviour is that we can layer on enhanced styles based on the features we want to use and these styles will show up in browsers that support them. But for those that do not, they will get a more basic look that still works anyway. And that will be our approach moving forward. Boolean operators for feature queries The and operator allows us to test for support of multiple properties within a single conditional. This would be useful for cases where the desired output requires multiple cutting-edge features to be supported at the same time. All the property:value pairs listed in the conditional must resolve to true for the styles within the rule to be applied. @supports (transform: rotate(45deg)) and (writing-mode: vertical-rl) { /* Some CSS styles */ } The or operator allows us to list multiple property:value pairs in the conditional and as long as one of them resolves to true, the styles within the block will be applied. A relevant use-case would be for properties with vendor-prefixes. @supports (background: -webkit-gradient(linear, left top, left bottom, from(white), to(black))) or (background: -o-linear-gradient(top, white, black)) or (background: linear-gradient(to bottom, white, black)) { /* Some CSS styles */ } The not operator negates the resolution of the property:value pair in the conditional, resolving to false when the property is supported and vice versa. This is useful when there are two distinct sets of styles to be applied depending on the support of a specific feature. However, we do need to keep in mind the case where a browser does not support feature queries, and handle the styles for those browsers accordingly. @supports not (shape-outside: polygon(100% 80%,20% 0,100% 0)) { /* Some CSS styles */ } To avoid confusion between and and or, these operators must be explicitly declared as opposed to using commas or spaces. To prevent confusion caused by precedence rules, and, or and not operators cannot be mixed without a layer of parentheses. The following rule is not valid, and the styles within the block will be ignored.
@supports (transition-property: background-color) or (animation-name: fade) and (transform: scale(1.5)) { /* Some CSS styles */ } To make it work, parentheses must be added either around the two properties adjacent to the or or the and operator like so: @supports ((transition-property: background-color) or (animation-name: fade)) and (transform: scale(1.5)) { /* Some CSS styles */ } @supports (transition-property: background-color) or ((animation-name: fade) and (transform: scale(1.5))) { /* Some CSS styles */ } The current specification states that whitespace is required after a not and on both sides of an and or or, but this may change in a future version of the specification. It is acceptable to add extra parentheses even when they are not needed, but omission of parentheses is considered invalid. Cascading web design I’d like to introduce the concept of cascading web design, an approach made possible with feature queries. Browser update cycles are much shorter these days, so new features and bug fixes are being pushed out a lot more frequently as compared to the early days of the web. With the maturation of web standards, browser behaviour is less unpredictable than before, but each browser will still have its own quirks. Chances are, the latest features will not ship across all browsers at the same time. But you know what? That’s perfectly fine. If we accept this as a feature of the web, instead of a bug, we’ve just opened up a lot more web design possibilities. The following example is a basic, responsive grid layout of items laid out with flexbox, as viewed on IE11. We can add a block of styles within an @supports rule to apply CSS grid properties on top of this layout in browsers that support them. The web is not a static medium. It is dynamic and interactive and we manipulate this medium by writing code to tell the browser what we want it to do. Rather than micromanaging the pixels in our designs, maybe it’s time we cede control of our designs to the browsers that render them. This means being okay with your designs looking different across browsers and devices. As mentioned earlier, CSS works best when various properties are combined. It’s one of those things whose whole is greater than the sum of its parts. So feature queries, when combined with media queries, allow us to design layouts that are most effective in the environment they have to perform in. Such an approach requires interpolative thinking, on multiple levels. As web designers and developers, we don’t just think in one fixed dimension; we get to think about how our design will morph on a narrow screen, or on an older browser, in addition to how it will appear on a browser with the latest features. In the following example, the layout on the left is what IE11 users will see; the one in the middle is what Firefox users will see, because Firefox doesn’t support CSS shapes yet; once it does, it will look like the layout on the right, which is what Chrome users see now. With the release of CSS Grid this year, we’ve hit another milestone in the evolution of the web as a medium. The beauty of the web is its backwards compatibility and generous fault tolerance. Browser features are largely additive, holding onto the good parts and building on top of them, while deprecating the bits that didn’t work well.
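It is worth knowing that the same conditional logic is exposed to JavaScript as well, via the CSS.supports() method of the CSS Object Model, which is handy when an enhancement involves scripting in addition to styles. A brief sketch follows; the has-grid class name is just an illustrative assumption, not part of any standard.

// Mirror an @supports (display: grid) query from JavaScript.
// CSS.supports() accepts a property and a value...
if (window.CSS && CSS.supports('display', 'grid')) {
  document.documentElement.classList.add('has-grid');
}

// ...or a full condition string, operators and all.
if (window.CSS && CSS.supports('(display: flex) and (shape-outside: circle())')) {
  // Initialise any script-driven layout enhancements here.
}

Like the @supports rule itself, this method is not available in Internet Explorer, which is why the sketch guards for its existence before calling it.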
Feature queries allow us to progressively enhance our CSS, establishing a basic level of user experience across the widest range of browsers, while building in more advanced functionality for browsers that can use them. And hopefully, this will allow more of us to create designs that truly embrace the nature of the web. 2017 Chen Hui Jing chenhuijing 2017-12-01T00:00:00+00:00 https://24ways.org/2017/cascading-web-design/ code
18 Grunt for People Who Think Things Like Grunt are Weird and Hard Front-end developers are often told to do certain things: Work in as small chunks of CSS and JavaScript as makes sense to you, then concatenate them together for the production website. Compress your CSS and minify your JavaScript to make their file sizes as small as possible for your production website. Optimize your images to reduce their file size without affecting quality. Use Sass for CSS authoring because of all the useful abstraction it allows. That’s not a comprehensive list of course, but those are the kind of things we need to do. You might call them tasks. I bet you’ve heard of Grunt. Well, Grunt is a task runner. Grunt can do all of those things for you. Once you’ve got it set up, which isn’t particularly difficult, those things can happen automatically without you having to think about them again. But let’s face it: Grunt is one of those fancy newfangled things that all the cool kids seem to be using but at first glance feels strange and intimidating. I hear you. This article is for you. Let’s nip some misconceptions in the bud right away Perhaps you’ve heard of Grunt, but haven’t done anything with it. I’m sure that applies to many of you. Maybe one of the following hang-ups applies to you. I don’t need the things Grunt does You probably do, actually. Check out that list up top. Those things aren’t nice-to-haves. They are pretty vital parts of website development these days. If you already do all of them, that’s awesome. Perhaps you use a variety of different tools to accomplish them. Grunt can help bring them under one roof, so to speak. If you don’t already do all of them, you probably should and Grunt can help. Then, once you are doing those, you can keep using Grunt to do more for you, which will basically make you better at doing your job. Grunt runs on Node.js — I don’t know Node You don’t have to know Node. Just like you don’t have to know Ruby to use Sass. Or PHP to use WordPress. Or C++ to use Microsoft Word. I have other ways to do the things Grunt could do for me Are they all organized in one place, configured to run automatically when needed, and shared among every single person working on that project? Unlikely, I’d venture. Grunt is a command line tool — I’m just a designer I’m a designer too. I prefer native apps with graphical interfaces when I can get them. But I don’t think that’s going to happen with Grunt1. The extent to which you need to use the command line is: Navigate to your project’s directory. Type grunt and press Return. After set-up, that is, which again isn’t particularly difficult. OK. Let’s get Grunt installed Node is indeed a prerequisite for Grunt. If you don’t have Node installed, don’t worry, it’s very easy. You literally download an installer and run it. Click the big Install button on the Node website. You install Grunt on a per-project basis. Go to your project’s folder. It needs a file there named package.json at the root level. You can just create one and put it there. The contents of that file should be this: { "name": "example-project", "version": "0.1.0", "devDependencies": { "grunt": "~0.4.1" } } Feel free to change the name of the project and the version, but the devDependencies thing needs to be in there just like that. This is how Node does dependencies. Node has a package manager called NPM (Node packaged modules) for managing Node dependencies (like a gem for Ruby if you’re familiar with that).
You could even think of it a bit like a plug-in for WordPress. Once that package.json file is in place, go to the terminal and navigate to your folder. Terminal rubes like me do it like this: [screenshot: navigating to the project folder in the terminal] Then run the command: npm install After you’ve run that command, a new folder called node_modules will show up in your project. [screenshot: the project folder, now containing node_modules] The other files you see there, README.md and LICENSE are there because I’m going to put this project on GitHub and that’s just standard fare there. The last installation step is to install the Grunt CLI (command line interface). That’s what makes the grunt command in the terminal work. Without it, typing grunt will net you a “Command Not Found”-style error. It is a separate installation for efficiency reasons. Otherwise, if you had ten projects you’d have ten copies of Grunt CLI. This is a one-liner again. Just run this command in the terminal: npm install -g grunt-cli You should close and reopen the terminal as well. That’s a generic good practice to make sure things are working right. Kinda like how restarting your computer after installing a new application was the thing to do in the olden days. Let’s make Grunt concatenate some files Perhaps in our project there are three separate JavaScript files: jquery.js – The library we are using. carousel.js – A jQuery plug-in we are using. global.js – Our authored JavaScript file where we configure and call the plug-in. In production, we would concatenate all those files together for performance reasons (one request is better than three). We need to tell Grunt to do this for us. But wait. Grunt actually doesn’t do anything all by itself. Remember Grunt is a task runner. The tasks themselves we will need to add. We actually haven’t set up Grunt to do anything yet, so let’s do that. The official Grunt plug-in for concatenating files is grunt-contrib-concat. You can read about it on GitHub if you want, but all you have to do to use it on your project is to run this command from the terminal (it will henceforth go without saying that you need to run the given commands from your project’s root folder): npm install grunt-contrib-concat --save-dev A neat thing about doing it this way: your package.json file will automatically be updated to include this new dependency. Open it up and check it out. You’ll see a new line: "grunt-contrib-concat": "~0.3.0" Now we’re ready to use it. To use it we need to start configuring Grunt and telling it what to do. You tell Grunt what to do via a configuration file named Gruntfile.js2 Just like our package.json file, our Gruntfile.js has a very special format that must be just right. I wouldn’t worry about what every word of this means. Just check out the format:

module.exports = function(grunt) {

  // 1. All configuration goes here
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),

    concat: {
      // 2. Configuration for concatenating files goes here.
    }

  });

  // 3. Where we tell Grunt we plan to use this plug-in.
  grunt.loadNpmTasks('grunt-contrib-concat');

  // 4. Where we tell Grunt what to do when we type "grunt" into the terminal.
  grunt.registerTask('default', ['concat']);

};

Now we need to create that configuration. The documentation can be overwhelming. Let’s focus just on the very simple usage example. Remember, we have three JavaScript files we’re trying to concatenate. We’ll list file paths to them under src in an array of file paths (as quoted strings) and then we’ll list a destination file as dest. The destination file doesn’t have to exist yet.
It will be created when this task runs and squishes all the files together. Both our jquery.js and carousel.js files are libraries. We most likely won’t be touching them. So, for organization, we’ll keep them in a /js/libs/ folder. Our global.js file is where we write our own code, so that will be right in the /js/ folder. Now let’s tell Grunt to find all those files and squish them together into a single file named production.js, named that way to indicate it is for use on our real live website.

concat: {
  dist: {
    src: [
      'js/libs/*.js', // All JS in the libs folder
      'js/global.js'  // This specific file
    ],
    dest: 'js/build/production.js',
  }
}

Note: throughout this article there will be little chunks of configuration code like above. The intention is to focus in on the important bits, but it can be confusing at first to see how a particular chunk fits into the larger file. If you ever get confused and need more context, refer to the complete file. With that concat configuration in place, head over to the terminal, run the command: grunt and watch it happen! production.js will be created and will be a perfect concatenation of our three files. This was a big aha! moment for me. Feel the power course through your veins. Let’s do more things! Let’s make Grunt minify that JavaScript We have so much prep work done now, adding new tasks for Grunt to run is relatively easy. We just need to: Find a Grunt plug-in to do what we want Learn the configuration style of that plug-in Write that configuration to work with our project The official plug-in for minifying code is grunt-contrib-uglify. Just like we did last time, we just run an NPM command to install it: npm install grunt-contrib-uglify --save-dev Then we alter our Gruntfile.js to load the plug-in: grunt.loadNpmTasks('grunt-contrib-uglify'); Then we configure it: uglify: { build: { src: 'js/build/production.js', dest: 'js/build/production.min.js' } } Let’s update that default task to also run minification: grunt.registerTask('default', ['concat', 'uglify']); Super-similar to the concatenation set-up, right? Run grunt at the terminal and you’ll get some deliciously minified JavaScript: [screenshot: the minified JavaScript output] That production.min.js file is what we would load up for use in our index.html file. Let’s make Grunt optimize our images We’ve got this down pat now. Let’s just go through the motions. The official image minification plug-in for Grunt is grunt-contrib-imagemin. Install it: npm install grunt-contrib-imagemin --save-dev Register it in the Gruntfile.js: grunt.loadNpmTasks('grunt-contrib-imagemin'); Configure it: imagemin: { dynamic: { files: [{ expand: true, cwd: 'images/', src: ['**/*.{png,jpg,gif}'], dest: 'images/build/' }] } } Make sure it runs: grunt.registerTask('default', ['concat', 'uglify', 'imagemin']); Run grunt and watch that gorgeous squishification happen: [screenshot: the optimised images in the build folder] Gotta love performance increases for nearly zero effort. Let’s get a little bit smarter and automate What we’ve done so far is awesome and incredibly useful. But there are a couple of things we can get smarter on and make things easier on ourselves, as well as Grunt: Run these tasks automatically when they should Run only the tasks needed at the time For instance: Concatenate and minify JavaScript when JavaScript changes Optimize images when a new image is added or an existing one changes We can do this by watching files. We can tell Grunt to keep an eye out for changes to specific places and, when changes happen in those places, run specific tasks.
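Before we get to watching, it may help to see the pieces so far in one place. This is a condensed sketch of the complete Gruntfile.js, assembled from the exact snippets above:

module.exports = function(grunt) {

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    concat: {
      dist: {
        src: ['js/libs/*.js', 'js/global.js'],
        dest: 'js/build/production.js',
      }
    },
    uglify: {
      build: {
        src: 'js/build/production.js',
        dest: 'js/build/production.min.js'
      }
    },
    imagemin: {
      dynamic: {
        files: [{
          expand: true,
          cwd: 'images/',
          src: ['**/*.{png,jpg,gif}'],
          dest: 'images/build/'
        }]
      }
    }
  });

  // Load the plug-ins we've installed so far.
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-imagemin');

  // Typing "grunt" in the terminal now runs all three tasks in order.
  grunt.registerTask('default', ['concat', 'uglify', 'imagemin']);

};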
Watching happens through the official grunt-contrib-watch plugin. I’ll let you install it. It is exactly the same process as the last few plug-ins we installed. We configure it by giving watch specific files (or folders, or both) to watch. By watch, I mean monitor for file changes, file deletions or file additions. Then we tell it what tasks we want to run when it detects a change. We want to run our concatenation and minification when anything in the /js/ folder changes. When it does, we should run the JavaScript-related tasks. And when things happen elsewhere, we should not run the JavaScript-related tasks, because that would be irrelevant. So: watch: { scripts: { files: ['js/*.js'], tasks: ['concat', 'uglify'], options: { spawn: false, }, } } Feels pretty comfortable at this point, hey? The only weird bit there is the spawn thing. And you know what? I don’t even really know what that does. From what I understand from the documentation it is the smart default. That’s real-world development. Just leave it alone if it’s working and if it’s not, learn more. Note: Isn’t it frustrating when something that looks so easy in a tutorial doesn’t seem to work for you? If you can’t get Grunt to run after making a change, it’s very likely to be a syntax error in your Gruntfile.js. That might look like this in the terminal: [screenshot: syntax errors reported when running Grunt] Usually Grunt is pretty good about letting you know what happened, so be sure to read the error message. In this case, a syntax error in the form of a missing comma foiled me. Adding the comma allowed it to run. Let’s make Grunt do our preprocessing The last thing on our list from the top of the article is using Sass — yet another task Grunt is well-suited to run for us. But wait: isn’t Sass technically in Ruby? Indeed it is. There is a version of Sass that will run in Node and thus not add an additional dependency to our project, but it’s not quite up to snuff with the main Ruby project. So, we’ll use the official grunt-contrib-sass plug-in which just assumes you have Sass installed on your machine. If you don’t, follow the command line instructions. What’s neat about Sass is that it can do concatenation and minification all by itself. So for our little project we can just have it compile our main global.scss file: sass: { dist: { options: { style: 'compressed' }, files: { 'css/build/global.css': 'css/global.scss' } } } We wouldn’t want to manually run this task. We already have the watch plug-in installed, so let’s use it! Within the watch configuration, we’ll add another subtask: css: { files: ['css/*.scss'], tasks: ['sass'], options: { spawn: false, } } That’ll do it. Now, every time we change any of our Sass files, the CSS will automatically be updated. Let’s take this one step further (it’s absolutely worth it) and add LiveReload. With LiveReload, you won’t have to go back to your browser and refresh the page. Page refreshes happen automatically and in the case of CSS, new styles are injected without a page refresh (handy for heavily state-based websites). It’s very easy to set up, since the LiveReload ability is built into the watch plug-in. We just need to: Install the browser plug-in Add to the top of the watch configuration: watch: { options: { livereload: true, }, scripts: { /* etc */ Restart the browser and click the LiveReload icon to activate it. Update some Sass and watch it change the page automatically. [screenshot: the browser live-reloading as styles change] Yum. Prefer a video?
If you’re the type that likes to learn by watching, I’ve made a screencast to accompany this article that I’ve published over on CSS-Tricks: First Moments with Grunt Leveling up As you might imagine, there is a lot of leveling up you can do with your build process. It surely could be a full time job in some organizations. Some hardcore devops nerds might scoff at the simplistic setup we have going here. But I’d advise them to slow their roll. Even what we have done so far is tremendously valuable. And don’t forget this is all free and open source, which is amazing. You might level up by adding more useful tasks: Running your CSS through Autoprefixer (A+ Would recommend) instead of preprocessor add-ons. Writing and running JavaScript unit tests (example: Jasmine). Building your image sprites and SVG icons automatically (example: Grunticon). Starting a server, so you can link to assets with proper file paths and use services that require a real URL like TypeKit and such, as well as remove the need for other tools that do this, like MAMP. Checking for code problems with HTML-Inspector, CSS Lint, or JS Hint. Having new CSS automatically injected into the browser whenever it changes. Helping you commit or push to a version control repository like GitHub. Adding version numbers to your assets (cache busting). Helping you deploy to a staging or production environment (example: DPLOY). You might level up by simply understanding more about Grunt itself: Read Grunt Boilerplate by Mark McDonnell. Read Grunt Tips and Tricks by Nicolas Bevacqua. Organize your Gruntfile.js by splitting it up into smaller files. Check out other people’s and projects’ Gruntfile.js. Learn more about Grunt by digging into its source and learning about its API. Let’s share I think some group sharing would be a nice way to wrap this up. If you are installing Grunt for the first time (or remember doing that), be especially mindful of little frustrating things you experience(d) but work(ed) through. Those are the things we should share in the comments here. That way we have this safe place and useful resource for working through those confusing moments without the embarrassment. We’re all in this thing together! 1 Maybe someday someone will make a beautiful Grunt app for your operating system of choice. But I’m not sure that day will come. The configuration of the plug-ins is the important part of using Grunt. Each plug-in is a bit different, depending on what it does. That means a uniquely considered UI for every single plug-in, which is a long shot. Perhaps a decent middle ground is this Grunt DevTools Chrome add-on. 2 Gruntfile.js is often referred to as Gruntfile in documentation and examples. Don’t literally name it Gruntfile — it won’t work. 2013 Chris Coyier chriscoyier 2013-12-11T00:00:00+00:00 https://24ways.org/2013/grunt-is-not-weird-and-hard/ code
104 Sitewide Search On A Shoe String One of the questions I got a lot when I was building web sites for smaller businesses was whether I could create a search engine for their site. Visitors should be able to search only this site and find things without the maintainer having to put “related articles” or “featured content” links on every page by hand. Back when this was all fields, this wasn’t easy as you either had to write your own scraping tool, use ht://dig or a paid service from providers like Yahoo, Altavista or later on Google. In the former case you had to swallow the bitter pill of computing and indexing all your content and storing it in a database for quick access, and in the latter it hurt your wallet. Times have moved on and nowadays you can have the same functionality for free using Yahoo’s “Build your own search service” – BOSS. The cool thing about BOSS is that it allows for a massive number of hits a day and you can mash up the returned data in any format you want. Another good feature of it is that it comes with JSON-P as an output format which makes it possible to use it without any server-side component! Starting with a working HTML form In order to add a search to your site, you start with a simple HTML form which you can use without JavaScript. Most search engines will allow you to filter results by domain. In this case we will search “bbc.co.uk”. If you use Yahoo as your standard search, this could be: <form id="customsearch" action="http://search.yahoo.com/search"> <div> <label for="term">Search this site:</label> <input type="text" name="p" id="term"> <input type="hidden" name="vs" id="site" value="bbc.co.uk"> <input type="submit" value="go"> </div> </form> The Google equivalent is: <form id="customsearch" action="http://www.google.co.uk/search"> <div> <label for="term">Search this site:</label> <input type="text" name="as_q" id="term"> <input type="hidden" name="as_sitesearch" id="site" value="bbc.co.uk"> <input type="submit" value="go"> </div> </form> In any case make sure to use the ID term for the search term and site for the site, as this is what we are going to use for the script. To make things easier, also have an ID called customsearch on the form. To use BOSS, you should get your own developer API key for BOSS and replace the one in the demo code. There is click tracking on the search results to see how successful your app is, so you should make it your own. Adding the BOSS magic BOSS is a REST API, meaning you can use it in any HTTP request or in a browser by simply adding the right parameters to a URL. Say for example you want to search “bbc.co.uk” for “christmas” all you need to do is open the following URL: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&format=xml&appid=YOUR-APPLICATION-ID Try it out and click it to see the results in XML. We don’t want XML though, which is why we get rid of the format=xml parameter which gives us the same information in JSON: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&appid=YOUR-APPLICATION-ID JSON makes most sense when you can send the output to a function and immediately use it. For this to happen all you need is to add a callback parameter and the JSON will be wrapped in a function call. Say for example we want to call SITESEARCH.found() when the data was retrieved, we can do it this way: http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=YOUR-APPLICATION-ID You can use this immediately in a script node if you want to.
The following code would display the total amount of search results for the term christmas on bbc.co.uk as an alert: <script type="text/javascript"> var SITESEARCH = {}; SITESEARCH.found = function(o){ alert(o.ysearchresponse.totalhits); } </script> <script type="text/javascript" src="http://boss.yahooapis.com/ysearch/web/v1/christmas?sites=bbc.co.uk&callback=SITESEARCH.found&appid=Kzv_lcHV34HIybw0GjVkQNnw4AEXeyJ9Rb1gCZSGxSRNrcif_HdMT9qTE1y9LdI-"> </script> However, for our example, we need to be a bit more clever with this. Enhancing the search form Here’s the script that enhances a search form to show results below it. SITESEARCH = function(){ var config = { IDs:{ searchForm:'customsearch', term:'term', site:'site' }, loading:'Loading results...', noresults:'No results found.', appID:'YOUR-APP-ID', results:20 }; var form; var out; function init(){ if(config.appID === 'YOUR-APP-ID'){ alert('Please get a real application ID!'); } else { form = document.getElementById(config.IDs.searchForm); if(form){ form.onsubmit = function(){ var site = document.getElementById(config.IDs.site).value; var term = document.getElementById(config.IDs.term).value; if(typeof site === 'string' && typeof term === 'string'){ if(typeof out !== 'undefined'){ out.parentNode.removeChild(out); } out = document.createElement('p'); out.appendChild(document.createTextNode(config.loading)); form.appendChild(out); var APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + term + '?callback=SITESEARCH.found&sites=' + site + '&count=' + config.results + '&appid=' + config.appID; var s = document.createElement('script'); s.setAttribute('src',APIurl); s.setAttribute('type','text/javascript'); document.getElementsByTagName('head')[0].appendChild(s); return false; } }; } } }; function found(o){ var list = document.createElement('ul'); var results = o.ysearchresponse.resultset_web; if(results){ var item,link,description; for(var i=0,j=results.length;i<j;i++){ item = document.createElement('li'); link = document.createElement('a'); link.setAttribute('href',results[i].clickurl); link.innerHTML = results[i].title; item.appendChild(link); description = document.createElement('p'); description.innerHTML = results[i]['abstract']; item.appendChild(description); list.appendChild(item); } } else { list = document.createElement('p'); list.appendChild(document.createTextNode(config.noresults)); } form.replaceChild(list,out); out = list; }; return{ config:config, init:init, found:found }; }(); Oooohhhh scary code! Let’s go through this one bit at a time: We start by creating a module called SITESEARCH and give it a configuration object: SITESEARCH = function(){ var config = { IDs:{ searchForm:'customsearch', term:'term', site:'site' }, loading:'Loading results...', noresults:'No results found.', appID:'YOUR-APP-ID', results:20 } Configuration objects are a great idea to make your code easy to change and also to override. In this case you can define different IDs than the ones agreed upon earlier, define a message to show when the results are loading, a message for when there aren’t any results, the application ID and the number of results that should be displayed. Note: you need to replace “YOUR-APP-ID” with the real ID you retrieved from BOSS, otherwise the script will complain! var form; var out; function init(){ if(config.appID === 'YOUR-APP-ID'){ alert('Please get a real application ID!'); } else { We define form and out as variables to make sure that all the methods in the module have access to them. We then check if there was a real application ID defined. 
If there wasn’t, the script complains and that’s that. form = document.getElementById(config.IDs.searchForm); if(form){ form.onsubmit = function(){ var site = document.getElementById(config.IDs.site).value; var term = document.getElementById(config.IDs.term).value; if(typeof site === 'string' && typeof term === 'string'){ If the application ID was a winner, we check if the form with the provided ID exists and apply an onsubmit event handler. The first thing we get is the values of the site we want to search in and the term that was entered, and check that those are strings. if(typeof out !== 'undefined'){ out.parentNode.removeChild(out); } out = document.createElement('p'); out.appendChild(document.createTextNode(config.loading)); form.appendChild(out); If both are strings, we check if out is undefined. We will create a loading message and subsequently the list of search results later on and store them in this variable. So if out is defined, it’ll be an old version of a search (as users will re-submit the form over and over again) and we need to remove that old version. We then create a paragraph with the loading message and append it to the form. var APIurl = 'http://boss.yahooapis.com/ysearch/web/v1/' + term + '?callback=SITESEARCH.found&sites=' + site + '&count=' + config.results + '&appid=' + config.appID; var s = document.createElement('script'); s.setAttribute('src',APIurl); s.setAttribute('type','text/javascript'); document.getElementsByTagName('head')[0].appendChild(s); return false; } }; } } }; Now it is time to call the BOSS API by assembling a correct REST URL, create a script node and apply it to the head of the document. We return false to ensure the form does not get submitted as we want to stay on the page. Notice that we are using SITESEARCH.found as the callback method, which means that we need to define this one to deal with the data returned by the API. function found(o){ var list = document.createElement('ul'); var results = o.ysearchresponse.resultset_web; if(results){ var item,link,description; We create a new list and then get the resultset_web array from the data returned from the API. If there aren’t any results returned, this array will not exist, which is why we need to check for it. Once we’ve done that we can define three variables to repeatedly store the item title we want to display, the link to point to and the description of the link. for(var i=0,j=results.length;i<j;i++){ item = document.createElement('li'); link = document.createElement('a'); link.setAttribute('href',results[i].clickurl); link.innerHTML = results[i].title; item.appendChild(link); description = document.createElement('p'); description.innerHTML = results[i]['abstract']; item.appendChild(description); list.appendChild(item); } We then loop over the results array and assemble a list of results with the titles in links and paragraphs with the abstract of the site. Notice the bracket notation for abstract, as abstract is a reserved word in JavaScript 2 :). } else { list = document.createElement('p'); list.appendChild(document.createTextNode(config.noresults)); } form.replaceChild(list,out); out = list; }; If there aren’t any results, we define a paragraph with the no results message as list. In any case we replace the old out (the loading message) with the list and re-define out as the list. return{ config:config, init:init, found:found }; }(); All that is left to do is return the properties and methods we want to make public. In this case found needs to be public as it is accessed by the API return. 
We return init to make it accessible and config to allow implementers to override any of the properties. Using the script In order to use this script, all you need to do is add it after the form in the document, override the application ID with your own and call init(): <form id="customsearch" action="http://search.yahoo.com/search"> <div> <label for="p">Search this site:</label> <input type="text" name="p" id="term"> <input type="hidden" name="vs" id="site" value="bbc.co.uk"> <input type="submit" value="go"> </div> </form> <script type="text/javascript" src="boss-site-search.js"></script> <script type="text/javascript"> SITESEARCH.config.appID = 'copy-the-id-you-know-to-get-where'; SITESEARCH.init(); </script> Where to go from here This is just a very simple example of what you can do with BOSS. You can define languages and regions, retrieve and display images and news and mix the results with other data sources before displaying them. One very cool feature is that by adding a view=keyterms parameter to the URL you can get the keywords of each of the results to drill deeper into the search. An example of this, written in PHP, is available on the YDN blog. For JavaScript solutions there is a handy wrapper called yboss available to help you go nuts. 2008 Christian Heilmann chrisheilmann 2008-12-04T00:00:00+00:00 https://24ways.org/2008/sitewide-search-on-a-shoestring/ code
139 Flickr Photos On Demand with getFlickr In case you don’t know it yet, Flickr is great. It is a lot of fun to upload, tag and caption photos and it is really handy to get a vast network of contacts through it. Using Flickr photos outside of it is a bit of a problem though. There is a Flickr API, and you can get almost every page as an RSS feed, but in general it is a bit tricky to use Flickr photos inside your blog posts or web sites. You might not want to get into the whole API game or use a server side proxy script, as you cannot retrieve RSS with Ajax because of the cross-domain security settings. However, Flickr also provides an undocumented JSON output that can be used to hack your own solutions in JavaScript without having to use a server side script. If you enter the URL http://flickr.com/photos/tags/panda you get to the flickr page with photos tagged “panda”. If you enter the URL http://api.flickr.com/services/feeds/photos_public.gne?tags=panda&format=rss_200 you get the same page as an RSS feed. If you enter the URL http://api.flickr.com/services/feeds/photos_public.gne?tags=panda&format=json you get a JavaScript function called jsonFlickrFeed with a parameter that contains the same data in JSON format. You can use this to easily hack together your own output by just providing a function with the same name. I wanted to make it easier for you, which is why I created the helper getFlickr for you to download and use. getFlickr for Non-Scripters Simply include the JavaScript file getflickr.js and the style getflickr.css in the head of your document: <script type="text/javascript" src="getflickr.js"></script> <link rel="stylesheet" href="getflickr.css" type="text/css"> Once this is done you can add links to Flickr pages anywhere in your document, and when you give them the CSS class getflickrphotos, they get turned into gallery links. When a visitor clicks these links they turn into loading messages and show a “popup” gallery with the connected photos once they have loaded. As the JSON returned is very small, it won’t take long. You can close the gallery, or click any of the thumbnails to view a photo. Clicking the photo makes it disappear and go back to the thumbnails. Check out the example page and click the different gallery links to see the results. Notice that getFlickr works with Unobtrusive JavaScript: when scripting is disabled, the links still lead to the photos on Flickr. getFlickr for JavaScript Hackers If you want to use getFlickr with your own JavaScripts you can use its main method leech(): getFlickr.leech(sTag, sCallback); sTag the tag you are looking for sCallback an optional function to call when the data was retrieved. After you called the leech() method you have two strings to use: getFlickr.html[sTag] contains an HTML list (without the outer UL element) of all the images linked to the correct pages at flickr. The images are the medium size; you can easily change that by replacing _m.jpg with _s.jpg for thumbnails. getFlickr.tags[sTag] contains a string of all the other tags flickr users added with the tag you searched for (space-separated) You can call getFlickr.leech() several times once the page has loaded to cache several result feeds before they are needed. This’ll make the photos show up more quickly for the end user. 
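As a rough illustration – a minimal sketch with invented tag names, using only the leech() method described above – priming the cache for a handful of tags could look like this:

window.onload = function(){
  // warm up getFlickr's cache so these galleries appear instantly later on;
  // the optional second parameter (a callback name) is left out here
  getFlickr.leech('panda');
  getFlickr.leech('reindeer');
  getFlickr.leech('snow');
};

Once each feed has arrived, getFlickr.html['panda'] and friends are ready to be written into the document whenever you need them. 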
If you want to offer a form for people to search for flickr photos and display them immediately you can use the following HTML: <form onsubmit="getFlickr.leech(document.getElementById('tag').value, 'populate');return false"> <label for="tag">Enter Tag</label> <input type="text" id="tag" name="tag" /> <input type="submit" value="energize" /> <h3>Tags:</h3><div id="tags"></div> <h3>Photos:</h3><ul id="photos"></ul> </form> All the JavaScript you’ll need (for a basic display) is this: function populate(){ var tag = document.getElementById('tag').value; document.getElementById('photos').innerHTML = getFlickr.html[tag].replace(/_m\.jpg/g,'_s.jpg'); document.getElementById('tags').innerHTML = getFlickr.tags[tag]; return false; } Easy as pie, enjoy! Check out the example page and try the form to see the results. 2006 Christian Heilmann chrisheilmann 2006-12-03T00:00:00+00:00 https://24ways.org/2006/flickr-photos-on-demand/ code
161 Keeping JavaScript Dependencies At Bay As we are writing more and more complex JavaScript applications we run into issues that have hitherto (god I love that word) not been a problem. The first decision we have to make is what to do when planning our app: one big massive JS file or a lot of smaller, specialised files separated by task. Personally, I tend to favour the latter, mainly because it allows you to work on components in parallel with other developers without lots of clashes in your version control. It also means that your application will be more lightweight as you only include components on demand. Starting with a global object This is why it is a good plan to start your app with one single object that also becomes the namespace for the whole application, say, for example, myAwesomeApp: var myAwesomeApp = {}; You can nest any necessary components into this one and also make sure that you check for dependencies like DOM support right up front. Adding the components The other thing to add to this main object is a components object, which defines all the components that are there and their file names. var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } } }; Technically you can also omit the loaded properties, but it is cleaner this way. The next thing to add is an addComponent function that can load your components on demand by adding new SCRIPT elements to the head of the document when they are needed. var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } } }; This allows you to add new components on the fly when they have not yet been loaded: if(!myAwesomeApp.components.gallery.loaded){ myAwesomeApp.addComponent('gallery'); }; Verifying that components have been loaded However, this is not safe, as the file might not be available. To make the dynamic adding of components safer, each of the components should have a callback at the end that notifies the main object that they indeed have been loaded: var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } }, componentAvailable:function(component){ this.components[component].loaded = true; } } For example the gallery.js file should call this notification as a last line: myAwesomeApp.gallery = function(){ // [... other code ...] 
}(); myAwesomeApp.componentAvailable('gallery'); Telling the implementers when components are available The last thing to add (actually as a courtesy measure for debugging and implementers) is to offer a listener function that gets notified when the component has been loaded: var myAwesomeApp = { components :{ formcheck:{ url:'formcheck.js', loaded:false }, dynamicnav:{ url:'dynamicnav.js', loaded:false }, gallery:{ url:'gallery.js', loaded:false }, lightbox:{ url:'lightbox.js', loaded:false } }, addComponent:function(component){ var c = this.components[component]; if(c && c.loaded === false){ var s = document.createElement('script'); s.setAttribute('type', 'text/javascript'); s.setAttribute('src',c.url); document.getElementsByTagName('head')[0].appendChild(s); } }, componentAvailable:function(component){ this.components[component].loaded = true; if(this.listener){ this.listener(component); }; } }; This allows you to write a main listener function that acts when certain components have been loaded, for example: myAwesomeApp.listener = function(component){ if(component === 'gallery'){ showGallery(); } }; Extending with other components As the main object is public, other developers can extend the components object with their own components and use the listener function to load dependent components. Say you have a bespoke component with data and labels in extra files: myAwesomeApp.listener = function(component){ if(component === 'bespokecomponent'){ myAwesomeApp.addComponent('bespokelabels'); }; if(component === 'bespokelabels'){ myAwesomeApp.addComponent('bespokedata'); }; if(component === 'bespokedata'){ myAwesomeApp.bespokecomponent.init(); }; }; myAwesomeApp.components.bespokecomponent = { url:'bespoke.js', loaded:false }; myAwesomeApp.components.bespokelabels = { url:'bespokelabels.js', loaded:false }; myAwesomeApp.components.bespokedata = { url:'bespokedata.js', loaded:false }; myAwesomeApp.addComponent('bespokecomponent'); Following this practice you can write pretty complex apps and still have full control over what is available when. You can also extend this to allow for CSS files to be added on demand. Influences If you like this idea and wonder if someone already uses it, take a look at the Yahoo! User Interface library, and especially at the YAHOO_config option of the global YAHOO.js object. 2007 Christian Heilmann chrisheilmann 2007-12-18T00:00:00+00:00 https://24ways.org/2007/keeping-javascript-dependencies-at-bay/ code
186 The Web Is Your CMS It is amazing what you can do these days with the services offered on the web. Flickr stores terabytes of photos for us and converts them automatically to all kind of sizes, finds people in them and even allows us to edit them online. YouTube does almost the same complete job with videos, LinkedIn allows us to maintain our CV, Delicious our bookmarks and so on. We don’t have to do these tasks ourselves any more, as all of these systems also come with ways to use the data in the form of Application Programming Interfaces, or APIs for short. APIs give us raw data when we send requests telling the system what we want to get back. The problem is that every API has a different idea of what is a simple way of accessing this data and in which format to give it back. Making it easier to access APIs What we need is a way to abstract the pains of different data formats and authentication formats away from the developer — and this is the purpose of the Yahoo Query Language, or YQL for short. Libraries like jQuery and YUI make it easy and reliable to use JavaScript in browsers (yes, even IE6) and YQL allows us to access web services and even the data embedded in web documents in a simple fashion – SQL style. Select * from the web and filter it the way I want YQL is a web service that takes a few inputs itself: a query that tells it what to get, update or access; an output format – XML, JSON, JSON-P or JSON-P-X; and a callback function (if you defined JSON-P or JSON-P-X). You can try it out yourself – check out this link to get back Flickr photos for the search term ‘santa’ in XML format: http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20flickr.photos.search%20where%20text%3D%22santa%22&format=xml The YQL query for this is select * from flickr.photos.search where text="santa" The easiest way to take your first steps with YQL is to look at the console. There you get sample queries, access to all the data sources available to you and you can easily put together complex queries. In this article, however, let’s use PHP to put together a web page that pulls in Flickr photos, blog posts, videos from YouTube and latest bookmarks from Delicious. Check out the demo and get the source code on GitHub. <?php /* YouTube RSS */ $query = 'select description from rss(5) where url="http://gdata.youtube.com/feeds/base/users/chrisheilmann/uploads?alt=rss&v=2&orderby=published&client=ytapi-youtube-profile";'; /* Flickr search by user id */ $query .= 'select farm,id,owner,secret,server,title from flickr.photos.search where user_id="11414938@N00";'; /* Delicious RSS */ $query .= 'select title,link from rss where url="http://feeds.delicious.com/v2/rss/codepo8?count=10";'; /* Blog RSS */ $query .= 'select title,link from rss where url="http://feeds.feedburner.com/wait-till-i/gwZf"'; /* The YQL web service root with JSON as the output */ $root = 'http://query.yahooapis.com/v1/public/yql?format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys'; /* Assemble the query */ $query = "select * from query.multi where queries='".$query."'"; $url = $root . '&q=' . 
urlencode($query); /* Do the curl call (access the data just like a browser would) */ $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); $output = curl_exec($ch); curl_close($ch); $data = json_decode($output); $results = $data->query->results->results; /* YouTube output */ $youtube = '<ul id="youtube">'; foreach($results[0]->item as $r){ $cleanHTML = undoYouTubeMarkupCrimes($r->description); $youtube .= '<li>'.$cleanHTML.'</li>'; } $youtube .= '</ul>'; /* Flickr output */ $flickr = '<ul id="flickr">'; foreach($results[1]->photo as $r){ $flickr .= '<li>'. '<a href="http://www.flickr.com/photos/codepo8/'.$r->id.'/">'. '<img src="http://farm' .$r->farm . '.static.flickr.com/'. $r->server . '/' . $r->id . '_' . $r->secret . '_s.jpg" alt="'.$r->title.'"></a></li>'; } $flickr .= '</ul>'; /* Delicious output */ $delicious = '<ul id="delicious">'; foreach($results[2]->item as $r){ $delicious .= '<li><a href="'.$r->link.'">'.$r->title.'</a></li>'; } $delicious .= '</ul>'; /* Blog output */ $blog = '<ul id="blog">'; foreach($results[3]->item as $r){ $blog .= '<li><a href="'.$r->link.'">'.$r->title.'</a></li>'; } $blog .= '</ul>'; function undoYouTubeMarkupCrimes($str){ $cleaner = preg_replace('/555px/','100%',$str); $cleaner = preg_replace('/width="[^"]+"/','',$cleaner); $cleaner = preg_replace('/<tbody>/','<colgroup><col width="20%"><col width="50%"><col width="30%"></colgroup><tbody>',$cleaner); return $cleaner; } ?> What we are doing here is creating a few different YQL statements and queuing them together with the query.multi table. Each of these can be run inside YQL itself. Check out the YouTube, Flickr, Delicious and Blog example in the console if you don’t believe me. The benefit of using this table is that we don’t make individual requests for each query but we get all the data in one single request – which means a much better-performing solution, as the YQL server farm can fetch this data faster than our servers can. We point the query to the YQL web service end point and get the resulting data using cURL. All that we need to do then is convert the returned data to HTML lists that can be printed out inside an HTML template. Mixing, matching and using HTML as a data source This was a simple example of what YQL can do for you. Where it gets really powerful however is by mixing and matching different APIs. YQL is also a good tool to get information from HTML documents. By using the html table you can load the content of an HTML document (which gets fixed automatically by HTMLTidy) and use XPATH to filter down results to what you need. Take the following example, which takes headlines from the news.bbc.co.uk homepage and runs the results through Yahoo’s Term Extractor API to give you a list of currently hot topics. select * from search.termextract where context in ( select content from html where url="http://news.bbc.co.uk" and xpath="//table[@width=800]//a" ) Try it out in the console or see the results here. In English, this means: Go to http://news.bbc.co.uk and get me the HTML. Run it through HTML Tidy to clean it up. 
Get me only the links inside the table with an attribute of width and the value 800. Get only the content of those links, and for each of the links, take the content and send it as context to the Yahoo Term Extractor API. If we choose JSON-P as the output format, we can use the outcome directly in JavaScript (see this demo or see its source): <ul id="hottopics"></ul> <script type="text/javascript"> function hottopics(o){ var res = o.query.results.Result, all = res.length, topics = {}, out = [], html = '', i=0; /* create hash from topics to prevent repetition */ for(i=0;i<all;i++){ topics[res[i]] = res[i]; }; for(i in topics){ out.push(i); }; html = '<li>' + out.join('</li><li>') + '</li>'; document.getElementById('hottopics').innerHTML = html; }; </script> <script type="text/javascript" src="http://query.yahooapis.com/v1/public/yql?q=select%20content%20from%20search.termextract%20where%20context%20in%20(select%20content%20from%20html%20where%20url%3D%22http%3A%2F%2Fnews.bbc.co.uk%22%20and%20xpath%3D%22%2F%2Ftable%5B%40width%3D800%5D%2F%2Fa%22)&format=json&callback=hottopics"></script> Using JSON, we can also use PHP, which means the demo works for everybody – not only those with JavaScript enabled (see this demo or see its source): <ul id="hottopics"><li> <?php $url = 'http://query.yahooapis.com/v1/public/yql?q=select%20content'. '%20from%20search.termextract%20where%20context%20in'. '%20(select%20content%20from%20html%20where%20url%3D%22'. 'http%3A%2F%2Fnews.bbc.co.uk%22%20and%20xpath%3D%22%2F%2F'. 'table%5B%40width%3D800%5D%2F%2Fa%22)&format=json'; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); $output = curl_exec($ch); curl_close($ch); $data = json_decode($output); $topics = array_unique($data->query->results->Result); echo join('</li><li>',$topics); ?> </li></ul> Summary This article could only scratch the surface of YQL. You have not only read access to the web but you can also write to web services. For example you can update Twitter, post to your WordPress blog or shorten a URL with bit.ly. Using Open Tables you can add any web service to the YQL interface and you can even run server-side JavaScript, which is useful, for example, to return Flickr photos as HTML or to get the HTML content from a document that needs POST data. The web of data is already here, and using YQL you don’t have to be a web services expert to use it and be part of it. 2009 Christian Heilmann chrisheilmann 2009-12-17T00:00:00+00:00 https://24ways.org/2009/the-web-is-your-cms/ code
233 Wrapping Things Nicely with HTML5 Local Storage HTML5 is here to turn the web from a web of hacks into a web of applications – and we are well on the way to this goal. The coming year will be totally and utterly awesome if you are excited about web technologies. This year the HTML5 revolution started and there is no stopping it. For the first time all the browser vendors are rallying together to make a technology work. The new browser war is fought over implementation of the HTML5 standard and not over random additions. We live in exciting times. Starting with a bang As with every revolution there is a lot of noise with bangs and explosions, and that’s the stage we’re at right now. HTML5 showcases are often CSS3 showcases, web font playgrounds, or video and canvas examples. This is great, as it gets people excited and it gives the media something to show. There is much more to HTML5, though. Let’s take a look at one of the less sexy, but amazingly useful features of HTML5 (it was in the HTML5 specs, but grew at such an alarming rate that it warranted its own spec): storing information on the client-side. Why store data on the client-side? Storing information in people’s browsers affords us a few options that every application should have: You can retain the state of an application – when the user comes back after closing the browser, everything will be as she left it. That’s how ‘real’ applications work and this is how the web ones should, too. You can cache data – if something doesn’t change then there is no point in loading it over the Internet when local access is so much faster. You can store user preferences – without needing to keep that data on your server at all. In the past, storing local data wasn’t much fun. The pain of hacky browser solutions In the past, all we had were cookies. I don’t mean the yummy things you get with your coffee, endorsed by the blue, furry junkie in Sesame Street, but the other, digital ones. Cookies suck – it isn’t fun to have an unencrypted HTTP overhead on every server request for storing four kilobytes of data in a cryptic format. It was OK for 1994, but really neither an easy nor a beautiful solution for the task of storing data on the client. Then came a plethora of solutions by different vendors – from Microsoft’s userdata to Flash’s LSO, and from Silverlight isolated storage to Google’s Gears. If you want to know just how many crazy and convoluted ways there are to store a bit of information, check out Samy’s evercookie. Clearly, we needed an easier and standardised way of storing local data. Keeping it simple – local storage And, lo and behold, we have one. The local storage API (or session storage, with the only difference being that session data is lost when the window is closed) is ridiculously easy to use. All you do is call a few methods on the window.localStorage object – or even just set the properties directly using the square bracket notation: if('localStorage' in window && window['localStorage'] !== null){ var store = window.localStorage; // valid, API way store.setItem('cow','moo'); console.log( store.getItem('cow') ); // => 'moo' // shorthand, breaks at keys with spaces store.sheep = 'baa'; console.log( store.sheep ); // 'baa' // shorthand for all store['dog'] = 'bark'; console.log( store['dog'] ); // => 'bark' } Browser support is actually pretty good: Chrome 4+; Firefox 3.5+; IE8+; Opera 10.5+; Safari 4+; plus iPhone 2.0+; and Android 2.0+. That should cover most of your needs. 
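One caveat worth adding (my note, not from the original spec text): the simple in-check above can pass while actual writes still fail – some browsers throw as soon as you call setItem() in private browsing mode, and a full quota has the same effect. A more defensive probe performs a test write; a minimal sketch:

function storageAvailable(){
  // feature-test by actually writing and removing a value, since
  // setItem() can throw even when window.localStorage exists
  try {
    var test = '__storage_test__';
    window.localStorage.setItem(test, test);
    window.localStorage.removeItem(test);
    return true;
  } catch(e) {
    return false;
  }
}

if(storageAvailable()){
  // safe to read and write
}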
However you do it, check for support first (or use a wrapper library like YUI Storage Utility or YUI Storage Lite). The data is stored on a per-domain basis and you can store up to five megabytes of data in localStorage for each domain. Strings attached By default, localStorage only supports strings as storage formats. You can’t store results of JavaScript computations that are arrays or objects, and every number is stored as a string. This means that long, floating point numbers eat into the available memory much more quickly than if they were stored as numbers. var cowdesc = "the cow is of the bovine ilk, "+ "one end is for the moo, the "+ "other for the milk"; var cowdef = { ilk: "bovine", legs: 4, udders: 4, purposes: { front: "moo", end: "milk" } }; window.localStorage.setItem('describecow',cowdesc); console.log( window.localStorage.getItem('describecow') ); // => the cow is of the bovine… window.localStorage.setItem('definecow',cowdef); console.log( window.localStorage.getItem('definecow') ); // => [object Object] = bad! This limits what you can store quite heavily, which is why it makes sense to use JSON to encode and decode the data you store: var cowdef = { "ilk":"bovine", "legs":4, "udders":4, "purposes":{ "front":"moo", "end":"milk" } }; window.localStorage.setItem('describecow',JSON.stringify(cowdef)); console.log( JSON.parse( window.localStorage.getItem('describecow') ) ); // => Object { ilk="bovine", more…} You can also come up with your own formatting solutions like CSV, or pipe | or tilde ~ separated formats, but JSON is very terse and has native browser support. Some use case examples The simplest use of localStorage is, of course, storing some data: the current state of a game; how far through a multi-form sign-up process a user is; and other things we traditionally stored in cookies. Using JSON, though, we can do cooler things. Speeding up web service use and avoiding exceeding the quota A lot of web services only allow you a certain amount of hits per hour or day, and can be very slow. By using localStorage with a time stamp, you can cache results of web services locally and only access them after a certain time to refresh the data. I used this technique in my An Event Apart 10K entry, World Info, to only load the massive dataset of all the world information once, and allow for much faster subsequent visits to the site. The following screencast shows the difference: For use with YQL (remember last year’s 24 ways entry?), I’ve built a small script called YQL localcache that wraps localStorage around the YQL data call. An example would be the following: yqlcache.get({ yql: 'select * from flickr.photos.search where text="santa"', id: 'myphotos', cacheage: ( 60*60*1000 ), callback: function(data) { console.log(data); } }); This loads photos of Santa from Flickr and stores them for an hour in the key myphotos of localStorage. If you call the function at various times, you receive an object back with the YQL results in a data property and a type property which defines where the data came from – live is live data, cached means it comes from cache, and freshcache indicates that it was called for the first time and a new cache was primed. The cache will work for an hour (60×60×1,000 milliseconds) and then be refreshed. So, instead of hitting the YQL endpoint over and over again, you hit it once per hour. Caching a full interface Another use case I found was to retain the state of a whole interface of an application by caching the innerHTML once it has been rendered. 
I use this in the Yahoo Firehose search interface, and you can get the full story about local storage and how it is used in this screencast: The stripped down code is incredibly simple (JavaScript with PHP embed): // test for localStorage support if(('localStorage' in window) && window['localStorage'] !== null){ var f = document.getElementById('mainform'); // test with PHP if the form was sent (the submit button has the name "sent") <?php if(isset($_POST['sent'])){ ?> // get the HTML of the form and cache it in the property "state" localStorage.setItem('state',f.innerHTML); <?php } else { ?> // if the form hasn't been sent... // check if a state property exists and write back the HTML cache if('state' in localStorage){ f.innerHTML = localStorage.getItem('state'); } <?php } ?> } Other ideas In essence, you can use local storage every time you need to speed up access. For example, you could store image sprites in base-64 encoded datasets instead of loading them from a server. Or you could store CSS and JavaScript libraries on the client. Anything goes – have a play. Issues with local and session storage Of course, not all is rainbows and unicorns with the localStorage API. There are a few niggles that need ironing out. As with anything, this needs people to use the technology and raise issues. Here are some of the problems: Inadequate information about storage quota – if you try to add more content to an already full store, you get a QUOTA_EXCEEDED_ERR and that’s it. There’s a great explanation and test suite for localStorage quota available. Lack of automatically expiring storage – a feature that cookies came with. Pamela Fox has a solution (also available as a demo and source code). Lack of encrypted storage – right now, everything is stored in readable strings in the browser. Bigger, better, faster, more! As cool as the local and session storage APIs are, they are not quite ready for extensive adoption – the storage limits might get in your way, and if you really want to go to town with accessing, filtering and sorting data, real databases are what you’ll need. And, as we live in a world of client-side development, people are moving from heavy server-side databases like MySQL to NoSQL environments. On the web, there is also a lot of work going on, with Ian Hickson of Google proposing the Web SQL database, and Nikunj Mehta, Jonas Sicking (Mozilla), Eliot Graff (Microsoft) and Andrei Popescu (Google) taking the idea beyond simply replicating MySQL and instead offering Indexed DB as an even faster alternative. On the mobile front, a really important feature is to be able to store data to use when you are offline (mobile coverage and roaming data plans anybody?) and you can use the Offline Webapps API for that. As I mentioned at the beginning, we have a very exciting time ahead – let’s make this web work faster and more reliably by using what browsers offer us. For more on local storage, check out the chapter on Dive into HTML5. 2010 Christian Heilmann chrisheilmann 2010-12-06T00:00:00+00:00 https://24ways.org/2010/html5-local-storage/ code
284 Subliminal User Experience The term ‘user experience’ is often used vaguely to quantify common elements of the interaction design process: wireframing, sitemapping and so on. UX undoubtedly involves all of these principles to some degree, but there really is a lot more to it than that. Good UX is characterized by providing the user with constant feedback as they step through your interface. It means thinking about and providing fallbacks and error resolutions in even the rarest of scenarios. It’s about omitting clutter to make way for the necessary, and using the most fundamental of design tools to influence a user’s path. It means making no assumptions, designing right down to the most distinct details and going one step further every single time. In many cases, good UX is completely subliminal. There are simple tools and subtleties we can build into our products to enhance the overall experience but, in order to do so, we really have to step beyond where we usually draw the line on what to design. The purpose of this article is not to provide technical how-tos, as the functionality is, in most cases, quite simple and could be implemented in a myriad of ways. Rather, it will present a handful of ideas for enhancing the experience of an interface at a deeper level of design without relying on the container. We’ll cover three elements that should get you thinking in the right mindset: progress activity and post-active states; pseudo-class preloading; and buttons and their (mis)behaviour. Progress activity and the post-active state We’ve long established that we can’t control the devices our products are viewed on, which browser they’ll run in or what connection speed will be used to access them. We accept this all as factual, so why is it so often left to the browser to provide feedback to the user when an event is triggered or an error encountered? The browser isn’t part of the interface — it’s merely a container. A simple, visual recognition of your users’ activity may be all it takes to make or break the product. Let’s begin with a commonly overlooked case: progress activity. A user moves their cursor over a hyperlink or button, which is clearly defined as one by the visual language of your content. Upon doing so, they trigger the :hover state to confirm this element is indeed interactive. So far, so good. What happens next is where it starts to fall apart: the user hits this link, presumably triggering an :active state, which is then returned to the normal state upon release. And then what? From this point on, your user is in limbo. The link has fallen back to either its regular or :visited state. You’ve effectively abandoned them and are relying entirely on the browser they’re using to communicate that something is happening. This poses quite a few problems: The user may lose focus of what they were doing. There is little consistency in progress indication between browsers. The user may not even notice that their action has been acknowledged. How many times have one or more of these events happened to you due to a lack of communication from the interface? Think about the differences between Safari and Chrome in this area — two browsers that, when compared to each other, are relatively similar in nature, though this basic feature differs in execution. Like all aspects of designing the user experience, there is no one true way to fix this problem, but we can introduce details that many users will unconsciously appreciate. Consider the basic loading indicator. 
It’s nothing new — in fact, some would argue it’s quite a cliché. However, whether using a spinning wheel or a progress bar, a gif or JavaScript, or something more sophisticated, these simple tools create an illusion of movement, progress and activity. Depending on the implementation, progress indication graphics can significantly increase a user’s perception of the speed at which an event is taking place. Combine this with a cursor change and a lock over the element to prevent double-clicking or reloading, and your chances of keeping your user’s valuable attention have significantly increased. Demo: Progress activity and the post-active state This same logic applies to all aspects of defaulting in a browser, from micro-elements like this up to something as simple as a 404 page. The difference in a user’s reaction to hitting the default Apache 404 and a hand-crafted, branded page are phenomenal and there are no prizes for guessing which one they’re more likely to exit from. Pseudo-class preloading Another detail that it pays well to look after is the use and abuse of the :hover element and, more importantly, the content revealed by it. Chances are you’re using the :hover pseudo-class somewhere in almost every screen you create. If content is being revealed on :hover and that content takes some time to load, there will inevitably be a delay the first time it is initiated. It appears tacky and half-finished when a tooltip or drop-down loads instantly, only to have its background or supporting elements follow through a second or two later. So, let’s preload the elements we know we’ll need. A very simple application of this would be to load each file into the default state of a visible element and offset them by a large number. This ensures our elements have loaded and are ready if and when they need to be displayed. element { background: url(path/to/image.jpg) -9999em -9999em no-repeat; } element .tooltip { display: none; } element:hover .tooltip { display: block; background: url(path/to/image.jpg) 0 0; } Background images are just one example. Of course, the same logic can apply to any form of revealed content. Using a sprite graphic can also be a clever — albeit tedious — method for achieving the same goal, so if you’re using a sprite, preloading in this way may not be necessary. The differences between preloading and not can only be visualized properly with an actual demonstration. Demo: Preloading revealed content Buttons and their (mis)behaviour Almost all of the time, a button serves just one purpose: to be clicked (or tapped). When a button’s pressed, therefore, if anything other than triggering the desired event occurs, a user naturally becomes frustrated. I often get funny looks when talking about this, but designing the details of a button is something I consider essential. It goes without saying that a button should always visually recognise :hover and :active states. We can take that one step further and disable some actions that get in the way of pressing the button. 
It’s rare that a user would ever want to select and use the text on a button, so let’s cleanly disable it: element { -moz-user-select: -moz-none; -webkit-user-select: none; user-select: none; } If the button is image-based or contains an image, we could also disable user dragging to make sure the image element stays locked to the button: element { -moz-user-drag: -moz-none; -webkit-user-drag: none; user-drag: none; } Demo: A more usable button Disabling global features like this should be done with utmost caution, as it’s very easy to cross the line between enhancement and friction. Cases where this is acceptable are very rare, but it’s a good trick to keep in mind nevertheless. Both Apple’s iCloud and Metalab’s Flow applications use these tools appropriately and to a great extent. You could argue that the visual feedback of having the text selected or image dragged when a user mis-hits the button is actually a positive effect, informing the user that their desired action did not work. However, covering for human error should be a designer’s job, not that of our users. We can (almost) ensure it does work for them by accommodating errors like this in most cases. Final thoughts Designing to this level of detail can seem obsessive, but as a designer and user of many interfaces and applications, I believe it can be the difference between a good user experience and a great one. The samples you’ve just seen are only a fraction of the detail we can design for. Keep in mind that the demonstrations, code and methods above outline just one way to do this. You may not agree with all of these processes or have the time and desire to consider them, but one fact remains: it’s not the technology, or the way it’s done that’s important — it’s the logic and the concept of designing everything. 2011 Chris Sealey chrissealey 2011-12-03T00:00:00+00:00 https://24ways.org/2011/subliminal-user-experience/ ux
5 Managing a Mind On 21 May 2013, I woke in a hospital bed feeling exhausted, disorientated and ashamed. The day before, I had tried to kill myself. It’s very hard to write about this and share it. It feels like I’m opening up the deepest recesses of my soul and laying everything bare, but I think it’s important we share this as a community. Since starting tentatively to write about my experience, I’ve had many conversations about this: sharing with others; others sharing with me. I’ve been surprised to discover how many people are suffering similarly, thinking that they’re alone. They’re not. Due to an insane schedule of teaching, writing, speaking, designing and just generally trying to keep up, I reached a point where my buffers completely overflowed. I was working so hard on so many things that I was struggling to maintain control. I was living life on fast-forward and my grasp on everything was slowly slipping. On that day, I reached a low point – the lowest point of my life – and in that moment I could see only one way out. I surrendered. I can’t really describe that moment. I’m still grappling with it. All I know is that I couldn’t take it any more and I gave up. I very nearly died. I’m very fortunate to have survived. I was admitted to hospital, taken there unconscious in an ambulance. On waking, I felt overwhelmed with shame and overcome with remorse, but I was resolved to grasp the situation and address it. The experience has forced me to confront a great deal of issues in my life; it has also encouraged me to seek a deeper understanding of my situation and, in particular, the mechanics of the mind. The relentless pace of change We work in a fast-paced industry: few others, if any, confront the daily challenges we face. The landscape we work within is characterised by constant flux. It’s changing and evolving at a rate we have never experienced before. Few industries reinvent themselves yearly, monthly, weekly… Ours is one of these industries. Technology accelerates at an alarming rate and keeping abreast of this change is challenging, to say the least. As designers it can be difficult to maintain a knowledge bank that is relevant and fit for purpose. We’re on a constant rollercoaster of endless learning, trying to maintain the pace as, daily, new ideas and innovations emerge — in some cases fundamentally changing our medium. Under the pressure of client work or product design and development, it can be difficult to find the time to focus on learning the new skills we need to remain relevant and functionally competent. The result, all too often, is that the edges of our days have eroded. We no longer work nine to five; instead we work eight to six, and after the working day is over we regroup to spend our evenings learning. It’s an unsustainable situation. From the workshop to the web Added to this pressure to keep up, our work is now undertaken under a global gaze, conducted under an ever-present spotlight. Tools like Dribbble, Twitter and others, while incredibly powerful, have an unfortunate side effect, that of unfolding your ideas in public. This shift, from workshop to web, brings with it additional pressure. In the past, the early stages of creativity took place within the relative safety of the workshop, an environment where one could take risks and gather feedback from a trusted few. We had space to make and space to break. No more. Our industry’s focus (and society’s focus) on sharing, leads us now to play out our decisions in public. 
This shift has changed us culturally, slowly but surely easing every aspect of our process – and lives – from private to public. This is at once liberating and debilitating. If you’re not careful, an addiction to followers, likes, retweets, page views and other forms of measurement can overwhelm you. When you release your work into the wild and all it’s greeted with is silence, it can cripple you. Reflecting on this, in an insightful article titled Derailed, Rogie King asks, “Can social popularity take us off the course of growth and where we were intended to go?” He makes a powerful point, that perhaps we might focus on what really matters, setting aside statistics. He concludes that to grow as practitioners we might be best served by seeking out critique through other avenues, away from the social spotlight. On status anxiety and impostor syndrome Following my experience I embarked on a period of self-reflection. I wanted to discover what had driven me to take the course of action I had. I wanted to ensure it never happened again. I wanted to understand how the mind works and, in so doing, learn a little more about myself. I’ve only begun this journey, but two things I discovered resonated with me: the twin pressures of status anxiety and impostor syndrome. In his excellent book Status Anxiety, the philosopher Alain de Botton explores a growing concern with status anxiety, a worry about how others perceive us and how this shapes our relationship with the world. He states: We all worry about what others think of us. We all long to succeed and fear failure. We all suffer – to a greater or lesser degree, usually privately and with embarrassment – from status anxiety. […] This is an almost universal anxiety that rarely gets mentioned directly: an anxiety about what others think of us; about whether we’re judged a success or a failure, a winner or a loser. We see these pressures played out and amplified in the social sphere we all inhabit. We are social animals and we cannot help but react to the landscape we live and work within. Even if your work receives the public praise you so secretly desire, you find yourself questioning this praise. A psychological phenomenon in which sufferers are unable to internalise their accomplishments, impostor syndrome is far more widespread than you’d imagine. The author Leigh Buchanan describes it as “A fear that one is not as smart or capable as others think.” As she puts it, “People who feel like frauds chalk up their accomplishments to external factors such as luck and timing, or worry they are coasting on charm and personality rather than on talent.” At the bottom, this was all I could see. I felt overwhelmed by others’ perception of me. Was I a success or a failure? Would I be discovered as the fraud I’d convinced myself that I was? These twin pressures – that I was unconscious of at the time – had led me to a place of crippling self-doubt, questioning my very existence. The act of discovery, of investigating how the mind functions, led me to a deeper understanding of myself. Developing an awareness of psychology and learning about conditions like status anxiety and impostor syndrome helped me to understand and recognise how my mind worked, enabling me to manage it more effectively. Make it count Reflecting upon my experience, I began to regroup, to focus on what really mattered. I’d taken on too much — as I believe many of us do. I was guilty of wanting to do all the things. I started to introduce pauses. 
Before blindly saying yes to everything, I forced myself to pause and ask: “Is this important?” Our community offers us huge benefits, but an always-on culture in which we’re bombarded daily by opportunity places temptation in our paths. It’s easy to get sucked into a vortex of wanting to be a part of everything. It’s important, however, to focus. As Simon Collison puts it: I cull and surrender topics. Then I focus on my strengths, mastering my core skills. We only have so much time and we can only do so much. It’s impossible, indeed futile, to try to do everything. Sometimes we need to step back a little and just enjoy life, enjoy others’ achievements, without feeling the need to be actively involved ourselves. As Mahatma Gandhi put it: A ‘no’ uttered from deepest conviction is better and greater than a ‘yes’ merely uttered to please, or what is worse, to avoid trouble. Young India, volume 9, 1927 We need to learn to say no a little more often. We need to focus on the work that matters. This, coupled with an understanding of the mind and how it works, can help us achieve a happier balance between work and life. Don’t waste your time. You only have one life. Make it count. 2013 Christopher Murphy christophermurphy 2013-12-21T00:00:00+00:00 https://24ways.org/2013/managing-a-mind/ process
259 Designing Your Future I’ve had the pleasure of working for a variety of clients – both large and small – over the last 25 years. In addition to my work as a design consultant, I’ve worked as an educator, leading the Interaction Design team at Belfast School of Art, for the last 15 years. In July, 2018 – frustrated with formal education, not least the ever-present hand of ‘austerity’ that has ravaged universities in the UK for almost a decade – I formally reduced my teaching commitment, moving from a full-time role to a half-time role. Making the move from a (healthy!) monthly salary towards a position as a freelance consultant is not without its challenges: one month your salary’s arriving in your bank account (and promptly disappearing to pay all of your bills); the next month, that salary’s been drastically reduced. That can be a shock to the system. In this article, I’ll explore the challenges encountered when taking a life-changing leap of faith. To help you confront ‘the fear’ – the nervousness, the sleepless nights and the ever-present worry about paying the bills – I’ll provide a set of tools that will enable you to take a leap of faith and pursue what deep down drives you. In short: I’ll bare my soul and share everything I’m currently working on to – once and for all – make a final bid for freedom. This isn’t easy. I’m sharing my innermost hopes and aspirations, and I might open myself up to ridicule, but I believe that by doing so, I might help others, by providing them with tools to help them make their own leap of faith. The power of visualisation As designers we have skills that we use day in, day out to imagine future possibilities, which we then give form. In our day-to-day work, we use those abilities to design products and services, but I also believe we can use those skills to design something every bit as important: ourselves. In this article I’ll explore three tools that you can use to design your future: Product DNA Artefacts From the Future Tomorrow Clients Each of these tools is designed to help you visualise your future. By giving that future form, and providing a concrete goal to aim for, you put the pieces in place to make that future a reality. Brian Eno – the noted musician, producer and thinker – states, “Humans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds.” Eno helpfully provides a powerful example: When Martin Luther King said, “I have a dream,” he was inviting others to dream that dream with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it. The dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real. When you imagine your future – designing an alternate, imagined reality in your mind – you begin the process of making that future real. Product DNA The first tool, which I use regularly – for myself and for client work – is a tool called Product DNA. The intention of this tool is to identify beacons from which you can learn, helping you to visualise your future. We all have heroes – individuals or organisations – that we look up to. Ask yourself, “Who are your heroes?” If you had to pick three, who would they be and what could you learn from them? (You probably have more than three, but distilling down to three is an exercise in itself.) 
Earlier this year, when I was putting the pieces in place for a change in career direction, I started with my heroes. I chose three individuals that inspired me: Alan Moore: the author of ‘Do Design: Why Beauty is Key to Everything’; Mark Shayler: the founder of Ape, a strategic consultancy; and Seth Godin: a writer and educator I’ve admired and followed for many years. Looking at each of these individuals, I ‘borrowed’ a little DNA from each of them. That DNA helped me to paint a picture of the kind of work I wanted to do and the direction I wanted to travel. Moore’s book - ‘Do Design’ – had a powerful influence on me, but the primary inspiration I drew from him was the sense of gravitas he conveyed in his work. Moore’s mission is an important one and he conveys that with an appropriate weight of expression. Shayler’s work appealed to me for its focus on equipping big businesses with a startup mindset. As he puts it: “I believe that you can do the things that you do better.” That sense – of helping others to be their best selves – appealed to me. Finally, the words Godin uses to describe himself – “An Author, Entrepreneur and Most of All, a Teacher” – resonated with me. The way he positions himself, as, “most of all, a teacher,” gave me the belief I needed that I could work as an educator, but beyond the ivory tower of academia. I’ve been exploring each of these individuals in depth, learning from them and applying what I learn to my practice. They don’t all know it, but they are all ‘mentors from afar’. In a moment of serendipity – and largely, I believe, because I’d used this tool to explore his work – I was recently invited by Alan Moore to help him develop a leadership programme built around his book. The key lesson here is that not only has this exercise helped me to design my future and give it tangible form, it’s also led to a fantastic opportunity to work with Alan Moore, a thinker who I respect greatly. Artefacts From the Future The second tool, which I also use regularly, is a tool called ‘Artefacts From the Future’. These artefacts – especially when designed as ‘finished’ pieces – are useful for creating provocations to help you see the future more clearly. ‘Artefacts From the Future’ can take many forms: they might be imagined magazine articles, news items, or other manifestations of success. By imagining these end points and giving them form, you clarify your goals, establishing something concrete to aim for. Earlier this year I revisited this tool to create a provocation for myself. I’d just finished Alla Kholmatova’s excellent book on ‘Design Systems’, which I would recommend highly. The book wasn’t just filled with valuable insights, it was also beautifully designed. Once I’d finished reading Kholmatova’s book, I started thinking: “Perhaps it’s time for me to write a new book?” Using the magic of ‘Inspect Element’, I created a fictitious page for a new book I wanted to write: ‘Designing Delightful Experiences’. I wrote a description for the book, considering how I’d pitch it. This imagined page was just what I needed to paint a picture in my mind of a possible new book. I contacted the team at Smashing Magazine and pitched the idea to them. I’m happy to say that I’m now working on that book, which is due to be published in 2019. Without this fictional promotional page from the future, the book would have remained as an idea – loosely defined – rolling around my mind. 
By spending some time turning that idea into something ‘real’, I had everything I needed to tell the story of the book, sharing it with the publishing team at Smashing Magazine. Of course, they could have politely informed me that they weren’t interested, but I’d have lost nothing – truly – in the process. As designers, creating these imaginary ‘Artefacts From the Future’ is firmly within our grasp. All we need to do is let go a little and allow our imaginations to wander. In my experience, working with clients and – to a lesser extent – students, it’s the ‘letting go’ that’s the hard part. It can be difficult to let down your guard and share a weighty goal, but I’d encourage you to do so. At the end of the day, you have nothing to lose. The key lesson here is that your ‘Artefacts From the Future’ will focus your mind. They’ll transform your unformed ideas into ‘tangible evidence’ of future possibilities, which you can use as discussion points and provocations, helping you to shape your future reality. Tomorrow Clients The third tool, which I developed more recently, is a tool called ‘Tomorrow Clients’. This tool is designed to help you identify a list of clients that you aspire to work with. The goal is to pinpoint who you would like to work with – in an ideal world – and define how you’d position yourself to win them over. Again, this involves ‘letting go’ and allowing your mind to imagine the possibilities, asking, “What if…?” Before I embarked upon the design of my new website, I put together a ‘soul searching’ document that acted as a focal point for my thinking. I contacted a number of designers for a second opinion to see if my thinking was sound. One of my graduates – Chris Armstrong, the founder of Niice – replied with the following: “Might it be useful to consider five to ten companies you’d love to work for, and consider how you’d pitch yourself to them?” This was just the provocation I needed. To add a little focus, I reduced the list to three, asking: “Who would my top three clients be?” By distilling the list down I focused on who I’d like to work for and how I’d position myself to entice them to work with me. My list included: IDEO, Adobe and IBM. All are companies I admire and I believed each would be interesting to work for. This exercise might – on the surface – appear a little like indulging in fantasy, but I believe it helps you to clarify exactly what it is you are good at and, just as importantly, put that into words. For each company, I wrote a short pitch outlining why I admired them and what I thought I could add to their already existing skillset. Focusing first on Adobe, I suggested establishing an emphasis on educational resources, designed to help those using Adobe’s creative tools to get the most out of them. A few weeks ago, I signed a contract with the team working on Adobe XD to create a series of ‘capsule courses’, focused on UX design. The first of these courses – exploring UI design – will be out in 2019. I believe that Armstrong’s provocation – asking me to shift my focus from clients I have worked for in the past to clients I aspire to work for in the future – made all the difference. The key lesson here is that this exercise encouraged me to raise the bar and look to the future, not the past. In short, it enabled me to proactively design my future. In closing… I hope these three tools will prove a welcome addition to your toolset. I use them when working with clients; I also use them on myself.
I passionately believe that you can design your future. I also firmly believe that you’re more likely to make that future a reality if you put some thought into defining what it looks like. As I say to my students and the clients I work with: It’s not enough to want to be a success; the word ‘success’ is too vague to be a destination. A far better approach is to define exactly what success looks like. The secret is to visualise your future in as much detail as possible. With that future vision in hand as a map, you give yourself something tangible to translate into a reality. 2018 Christopher Murphy christophermurphy 2018-12-15T00:00:00+00:00 https://24ways.org/2018/designing-your-future/ process
301 Stretching Time Time is valuable. It’s a precious commodity that, if we’re not too careful, can slip effortlessly through our fingers. When we think about the resources at our disposal we’re often guilty of forgetting the most valuable resource we have to hand: time. We are all given an allocation of time from the time bank. 86,400 seconds a day to be precise, not a second more, not a second less. It doesn’t matter if we’re rich or we’re poor, no one can buy more time (and no one can save it). We are all, in this regard, equals. We all have the same opportunity to spend our time and use it to maximum effect. As such, we need to use our time wisely. I believe we can ‘stretch’ time, ensuring we make the most of every second and maximising the opportunities that time affords us. Through a combination of ‘Structured Procrastination’ and ‘Focused Finishing’ we can open our eyes to all of the opportunities in the world around us, whilst ensuring that we deliver our best work precisely when it’s required. A win-win, I’m sure you’ll agree. Structured Procrastination I’m a terrible procrastinator. I used to think that was a curse – “Why didn’t I just get started earlier?” – over time, however, I’ve started to see procrastination as a valuable tool if it is used in a structured manner. Don Norman refers to procrastination as ‘late binding’ (a term I’ve happily hijacked). As he argues, in Why Procrastination Is Good, late binding (delay, or procrastination) offers many benefits: Delaying decisions until the time for action is beneficial… it provides the maximum amount of time to think, plan, and determine alternatives. We live in a world that is constantly changing and evolving; as such, the best time to execute is often ‘just in time’. By delaying decisions until the last possible moment we can arrive at solutions that address the current reality more effectively, resulting in better outcomes. Procrastination isn’t just useful from a project management perspective, however. It can also be useful for allowing your mind the space to wander, make new discoveries and find creative connections. By embracing structured procrastination we can ‘prime the brain’. As James Webb Young argues, in A Technique for Producing Ideas, all ideas are made of other ideas and the more we fill our minds with other stimuli, the greater the number of creative opportunities we can uncover and bring to life. By late binding, and availing of a lack of time pressure, you allow the mind space to breathe, enabling you to uncover elements that are important to the problem you’re working on and, perhaps, discover other elements that will serve you well in future tasks. When setting forth upon the process of writing this article I consciously set aside time to explore. I allowed myself the opportunity to read, taking in new material, safe in the knowledge that what I discovered – if not useful for this article – would serve me well in the future. Ron Burgundy summarises this neatly: Procrastinator? No. I just wait until the last second to do my work because I will be older, therefore wiser. An ‘older, therefore wiser’ mind is a good thing. We’re incredibly fortunate to live in a world where we have a wealth of information at our fingertips. Don’t waste the opportunity to learn; rather, embrace that opportunity. Make the most of every second to fill your mind with new material; the rewards will be ample.
Deadlines are deadlines, however, and deadlines offer us the opportunity to focus our minds, bringing together the pieces of the puzzle we found during our structured procrastination. Like everyone, I’ll hear a tiny but insistent voice in my head that starts to rise when the deadline is approaching. The older you get, the closer to the deadline that voice starts to chirp up. At this point we need to focus. Focused Finishing We live in an age of constant distraction. Smartphones are both a blessing and a curse: they keep us connected, but if we’re not careful the constant connection they provide can interrupt our flow. When a deadline is accelerating towards us it’s important to set aside the distractions and carve out a space where we can work in a clear and focused manner. When it’s time to finish, it’s important to avoid context switching and focus. All those micro-interactions throughout the day – triaging your emails, checking social media and browsing the web – can get in the way of you hitting your deadline. At this point, they’re distractions. Chunking tasks and managing when they’re scheduled can improve your productivity by a surprising order of magnitude. At this point it’s important to remove distractions which result in ‘attention residue’, where your mind is unable to focus on the current task, due to the mental residue of other, unrelated tasks. By focusing on a single task in a focused manner, it’s possible to minimise the negative impact of attention residue, allowing you to maximise your performance on the task at hand. Cal Newport explores this in his excellent book, Deep Work, which I would highly recommend reading. As he puts it: Efforts to deepen your focus will struggle if you don’t simultaneously wean your mind from a dependence on distraction. To help you focus on finishing, it’s helpful to set up a work-focused environment that is purposefully free from distractions. There’s a time and a place for structured procrastination, but – equally – there’s a time and a place for focused finishing. The French term ‘mise en place’ is drawn from the world of fine cuisine – I discovered it when I was procrastinating – and it’s applicable in this context. The term translates as ‘putting in place’ or ‘everything in its place’ and it refers to the process of getting the workplace ready before cooking. Just like a professional chef organises their utensils and arranges their ingredients, so too can you. Thanks to the magic of multiple users on computers, it’s possible to create a separate user on your computer – without access to email and other social tools – so that you can switch to that account when you need to focus and hit the deadline. Another, less technical way of achieving the same result – depending, of course, upon your line of work – is to close your computer and find some non-digital, unconnected space to work in. The goal is to carve out time to focus so you can finish. As Newport states: If you don’t produce, you won’t thrive – no matter how skilled or talented you are. Procrastination is fine, but only if it’s accompanied by finishing. Create the space to finish and you’ll enjoy the best of both worlds. In closing… There is a time and a place for everything: there is a time to procrastinate, and a time to focus. To truly reap the rewards of time, the mind needs both.
By combining the processes of ‘Structured Procrastination’ and ‘Focused Finishing’ we can make the most of our 86,400 seconds a day, ensuring we are constantly primed to make new discoveries, but just as importantly, ensuring we hit the all-important deadlines. Make the most of your time; you only get so much. Use every second productively and you’ll be thankful that you did. Don’t waste your time: once it’s gone, it’s gone… and you can never get it back. 2016 Christopher Murphy christophermurphy 2016-12-21T00:00:00+00:00 https://24ways.org/2016/stretching-time/ process
133 Gravity-Defying Page Corners While working on Stikkit, a “page curl” came to be. Not being as crafty as Veerle, you see. I fired up Photoshop to see what could be. “Another copy is running on the network” … oopsie. With license issues sorted out and a concept in mind I set out to create something flexible and refined. One background image and code that is sure to be lean. A simple solution for lazy people like me. The curl I’ll be showing isn’t a curl at all. It’s simply a gradient that’s 18 pixels tall. With a fade to the left that’s diagonally aligned and a small fade on the right that keeps the illusion defined. Create a selection with the marquee tool (keeping in mind a reasonable minimum width) and drag a gradient (black to transparent) from top to bottom. Now drag a gradient (the background color of the page to transparent) from the bottom left corner to the top right corner. Finally, drag another gradient from the right edge towards the center, about 20 pixels or so. But the top is flat and can be positioned precisely just under the bottom right edge very nicely. And there it will sit, never ever to be busted by varying sizes of text when adjusted. <div id="page"> <div id="page-contents"> <h2>Gravity-Defying!</h2> <p>Lorem ipsum dolor ...</p> </div> </div> Let’s dive into code and in the markup you’ll see “is that an extra div?” … please don’t kill me? The #page div sets the width and bottom padding whose height is equal to the shadow we’re adding. The #page-contents div will set padding in ems to scale with the text size the user intends. The background color will be added here too but not overlapping the shadow where #page’s padding makes room. A simple technique that you may find amusing is to substitute a PNG for the GIF I was using. For that would be crafty and future-proof, too. The page curl could sit on any background hue. I hope you’ve enjoyed this easy little trick. It’s hardly earth-shattering, and arguably slick. But it could come in handy, you just never know. Happy Holidays! And pleasant dreams of web three point oh. 2006 Dan Cederholm dancederholm 2006-12-24T00:00:00+00:00 https://24ways.org/2006/gravity-defying-page-corners/ design
74 Should We Be Reactive? Evolution Looking at the evolution of the web and the devices we use should help remind us that the times we’re adjusting to are just another step on a journey. These times seem to be telling us that we need to embrace flexibility. Imagine an HTML file containing nothing but text. It’s viewable on any web-capable device and reasonably readable: the notion of the universality of the web was very much a founding principle. Right from the beginning, browser vendors understood that we’d want text to reflow (why wouldn’t we?), so I consider the first websites to have been fluid. As we attempted to exert more control through our designs in the early days of the web, debates about whether we should produce fixed or fluid sites raged. We could create fluid designs using tables, but what we didn’t have then was a wide range of web-capable devices or the ability to control this fluidity. The biggest changes occurred when stats showed enough people using a different screen resolution that we could cater for it. To me, the techniques of responsive web design provide the control we were missing. Combining new approaches to layout and images with media queries empowered us to learn how to embrace the inherent flexibility of the web in ways to suit our work and the devices used by our audience. Perhaps another kind of flexibility might be found in how we use context to affect how we present our content; to consider how we might use the information we can access from people, browsers and devices to provide web experiences – effectively creating sites that react to initial or changing circumstances in the relationship between people and our content. Embracing flexibility So what is context? Put simply, you could think of it as a secondary piece of information that helps clarify the meaning of the first. It helps set a scene or describe circumstances. I think that Cennydd Bowles has summed it up really well through talks he’s given recently, in which he’s arrived at the acronym DETAILS (Device, Environment, Time, Activity, Individual, Location, Social) – I encourage you to keep an eye out for his next book due in the new year where he’ll explore this idea much further. This clarity over what context could mean in terms of what we do on the web is fundamental, directing us towards ways we might use it. When you stop to think about it, we’ve been using some basic pieces of this information right from the beginning, like bits of JavaScript or Java applets that serve an appropriate greeting to your site’s visitors, or show their location, or even local weather. But what if we think of this from the beginning of our projects? We should think about our content first. Once we know this and have a direction, perhaps then we can think about what context, or even multiple contexts, might help us to communicate more effectively. The real world There’s always been a disconnect between the real world and the web, which is to be expected. But the world around us is a sea of data; every fundamental building block – people, places, events and things – has information waiting to be explored. For sites based around physical objects or locations, this divide is really apparent. We don’t ordinarily take the time to describe in code the properties of a place, or consider whether your relationship to the place in the real world should have any impact on your relationship with a site about it.
When I think about local businesses, they have such rich properties to draw on and yet we don’t really explore them in any meaningful way, even through something as simple as opening hours. Now we have data… We’ve long had access to the current time both on server- and client-sides. The use of geolocation is easier than ever, but when we look at the range of information we could glean to help us make some choices, maybe there’s some help on the horizon from projects like the W3C Device APIs Working Group. This might prove useful to help make us aware of network and battery conditions of a device, along with the potential to gain data from other sensors, which could tell us about lighting conditions, ambient noise levels and temperature depending on the capabilities of the device. It may be that our sites have some form of login or access to your profile from another site. Along with data from our devices and browsers, this should give us a sense of how best to talk to our audience in certain situations. We don’t necessarily need to know any personal details, just enough to make decisions about how to present our sites. The reactive web? So why reactive web design? I’m hoping that a name might help us to have a common vocabulary not only about what we mean when we talk about context, but how it could be considered through our projects, right from the early stages. How could this manifest itself? A simple example might be a location-aware panel on your site. Perhaps the space is a little down in your content hierarchy but serves a perfectly valid purpose by default. To visitors outside the country perhaps this works fine, but within your country maybe this panel could be used to communicate more effectively. Further still, if we knew the visitors were in the vicinity, we could talk to them more directly. What if both time and location were relevant? This space could work as before but you could consider how time could intersect with your local audience. If you know they’re local and it’s a certain time of day, you could communicate directly with them. This example isn’t beyond what banner ads often do and uses easily accessible information. There are more unusual combinations we may be able to find, such as movement and presence. Perhaps a site that tells a story, which changes design and content based on whether you’re moving, how long you’ve been on the site and how far you’ve travelled. This isn’t what we typically expect from websites, but we should bear in mind that what websites are now will not be what they become in the future. You could do much of this contextual presentation through native apps, of course. The Silent History, an app novel written and designed for iPad and iPhone, uses an exploration element, providing “hundreds of location-based stories across the U.S. and around the world. These can be read only when your device’s GPS matches the coordinates of the specified location.” But considering the universality of the web, we could redefine what web-based experiences should be like. Not all methods would work well on the web, but that’s a decision that has to be made for a specific project. By thinking more broadly about any web-capable device, we can use what we know to provide relevant experiences for our site’s visitors. We need to be sure what we mean by relevant, of course! 
Reality bites While there are incredible possibilities, from a simple panel on a site to something bordering on living sites that evolve and change with our circumstances, we need to act with a degree of pragmatism and understand how much of what we could do is based on assumptions and the bias of our own experiences. We could go wild with changing the way our content is presented based on contextual information, but if we’re not careful, what we end up with will confuse people and could provide a very fractured experience. As much as possible we need to think more ethnographically, observe and question people in the situations we think may be relevant, and test our assumptions as early as we can. Even on small projects, there may be ways we can validate our assumptions and test with our audience. The key to applying contextual content or cues is not to break the experience between contextual views (as I think we now wouldn’t when hiding content on a mobile view). It’s another instance of progressive enhancement – as we know certain pieces of information, we can enhance the experience. Also, if you do change content, how do you avoid making the experience more cumbersome for your visitors? It’s all about communication Content is at the core of what we do, but if we consider context we need to understand the impact on that. The effect could be as subtle as an altered hierarchy, involve swapping out panels of content, or in extreme instances perhaps all of your content might change. In some ways, this extends the notion of adaptive content that Karen McGrane has been talking about, to how we write and store the content we create. Thinking about the impact of context may require us to re-evaluate our site structure, too. Whatever we decide, we have to be clear what will happen and manage the expectations of our users. The bottom line What I’m proposing isn’t that we go crazy and end up with a confused, disjointed set of experiences across the web. What I hope is that starting right from the beginning of a project, we think about what context is and could be, and see what relevance it might have to what we’re trying to communicate. This strategic process leads us to think about design. We are slowly adapting to what it means to be flexible through responsive and adaptive processes. What does thinking about contextual states mean to us (or designing for state in general)? Does this highlight again how difficult it’ll be for our tools to keep up with our processes and output? In terms of code, the vast majority of this data comes from the client-side through JavaScript. While we can progressively enhance, this could lead to a lot of code bloat through feature or capability detection, and potentially a lot of conditional loading of scripts. It’s a real shame we don’t get much we can rely on from the server-side – we know how unreliable user agents are! We need to understand why we’d do this. Are we trying to communicate well and be useful, or doing it to show off? Underneath it all, what do we base our decisions on? Do we have actual insight or are we proceeding from our assumptions and the bias of our own experiences? Scott Jenson summed it up best for me: (to paraphrase) the pain we put people through has to be greatly outweighed by the value we offer. I see that this could be another potential step in our evolution on the web; continuing this exploration of the flexibility the web allows us. It’s amazing we can do such incredible things from what is essentially a set of disparate, linked documents.
2012 Dan Donald dandonald 2012-12-09T00:00:00+00:00 https://24ways.org/2012/should-we-be-reactive/ design
174 Type-Inspired Interfaces One of the things that terrifies me most about a new project is the starting point. How is the content laid out? What colors do I pick? Once things like that are decided, it becomes significantly easier to continue design, but it’s the blank page where I spend the most time. To that end, I often start by choosing type. I don’t need to worry about colors or layout or anything else… just the right typefaces that support the art direction. (This article won’t focus on how to choose a typeface, but there are some really great resources if you’re interested in that sort of thing.) And just like that, all your work is done. “Hold it just a second,” you might say. “All I’ve done is pick type. I still have to do the rest!” To which I would reply, “Silly rabbit. You already have!” You see, picking the right typeface gets you farther than you might think. Here are a few tips on taking cues from type to design interfaces and interface elements. Perfecting Web 2.0 If you’re going for that beloved rounded corner look, you might class it up a bit by choosing the wonderful Omnes Pro by Joshua Darden. As the typeface already has a rounded aesthetic, making buttons that fit the style should be pretty easy. I’ve found that using multiples helps to keep your interfaces looking balanced and proportional. Noticing that the top left edge of the letter “P” has about a 12px corner radius, let’s choose a 24px radius for our button (a multiple of 2), so that we get proper rounded corners. By taking mathematical measurements from the typeface, our button looks more thought out than just “place arbitrary text on arbitrarily-sized button.” Pretty easy, eh? What’s in a name(plate)? Rounded buttons are pretty popular buttons nowadays, so let’s try something a bit more stylized. Have a gander at Brothers, a sturdy face from Emigre. The chiseled edges give us a perfect cue for a stylized button. Using the same slope, you can make plated-looking buttons that fit a different kind of style. Headlining You might even take some cues from the style of the typeface itself. Didone serifs are known for their lack of brackets—that is, a gradual transition from the stem to the serif. Instead, they typically connect at a right angle. Another common characteristic is the high contrast in the strokes: very thick stems, very thin serifs. So, when using a high contrast typeface, you can use it to your advantage to enhance hierarchy. Following our “multiples” guideline, a 12px measurement from the stems helps us create a top rule with a height of 24px (a multiple of 2). We can take the exact 1px measurement from the serif—a multiple of 1—to create the bottom rule. Voilà! I use this technique a lot. Swashbucklers And don’t forget the importance of visual “speed bumps” to break up long passages of text. A beautiful face like Alejandro Paul’s Ministry Script has over a thousand characters that can be manipulated or even combined to create elegant interface elements. Altering the partial differential character (∂) creates a delightful ornament that can help to guide the eye through content. Stagger & Swagger What about layout? How can we use typography to inform how our content is displayed? Let’s take a typeface like Assembler. We might use this for a design that needs to feel uneasy or uncomfortable. In design terms, that might translate into using irregular shapes and asymmetry.
Using the proportional distances and degrees from the perpendiculars, we could easily create a multi-column layout that jives with the general tone. And for all you skeptics that don’t think a layout like this is doable on the web, stranger things have happened. Background texture generously offered by Bittbox. Overall Design Direction Finally, your typography could impact the entire look of the site, from the navigation to the interaction and everything in between. Check out how the (now-defunct) Nike Free site’s typography echoes the product itself, and in turn influences the navigation. Find Your Type With thousands of fonts to choose from, the possibilities are ridiculously open. From angles to radii to color to weight, you’ve got endless fodder before you. Great type designers spent countless hours slaving over these detailed letterforms; take advantage of it! Don’t feel like you have to limit yourself to the same old Helvetica and wet floors… unless your design calls for it. Happy hunting! 2009 Dan Mall danmall 2009-12-07T00:00:00+00:00 https://24ways.org/2009/type-inspired-interfaces/ design
235 Real Animation Using JavaScript, CSS3, and HTML5 Video When I was in school to be a 3-D animator, I read a book called Timing for Animation. Though only 152 pages long, it’s essentially the bible for anyone looking to be a great animator. In fact, Pixar chief creative officer John Lasseter used the first edition as a reference when he was an animator at Walt Disney Studios in the early 1980s. In the book, authors John Halas and Harold Whitaker advise: Timing is the part of animation which gives meaning to movement. Movement can easily be achieved by drawing the same thing in two different positions and inserting a number of other drawings between the two. The result on the screen will be movement; but it will not be animation. But that’s exactly what we’re doing with CSS3 and JavaScript: we’re moving elements, not animating them. We’re constantly specifying beginning and end states and allowing the technology to interpolate between the two. And yet, it’s the nuances within those middle frames that create the sense of life we’re looking for. As bandwidth increases and browser rendering grows more consistent, we can create interactions in different ways than we’ve been able to before. We’re encountering motion more and more on sites we’d generally label ‘static.’ However, this motion is mostly just movement, not animation. It’s the manipulation of an element’s properties, most commonly width, height, x- and y-coordinates, and opacity. So how do we create real animation? The metaphor In my experience, animation is most believable when it simulates, exaggerates, or defies the real world. A bowling ball falls differently than a racquetball. They each have different weights and sizes, which affect the way they land, bounce, and impact other objects. This is a major reason that JavaScript animation frequently feels mechanical; it doesn’t complete a metaphor. Expanding and collapsing a <div> feels very different than opening a door or unfolding a piece of paper, but it often shouldn’t. The interaction itself should tie directly to the art direction of a page. Physics Understanding the physics of a situation is key to creating convincing animation, even if your animation seeks to defy conventional physics. Isaac Newton’s first law of motion states, “Every body remains in a state of rest or uniform motion (constant velocity) unless it is acted upon by an external unbalanced force.” Once a force acts upon an object, the object’s shape can change accordingly, depending on the strength of the force and the mass of the object. Another nugget of wisdom from Halas and Whitaker: All objects in nature have their own weight, construction, and degree of flexibility, and therefore each behaves in its own individual way when a force acts upon it. This behavior, a combination of position and timing, is the basis of animation. The basic question which an animator is continually asking himself is this: “What will happen to this object when a force acts upon it?” And the success of his animation largely depends on how well he answers this question. In animating with CSS3 and JavaScript, keep physics in mind. How ‘heavy’ is the element you’re interacting with? What kind of force created the action? A gentle nudge? A forceful shove? These subtleties will add a sense of realism to your animations and make them much more believable to your users. Misdirection Magicians often use misdirection to get their audience to focus on one thing rather than another.
They fool us into thinking something happened that actually didn’t. Animation is the same, especially on a screen. By changing the arrangement of pixels on screen at a fast enough rate, your eyes fool your mind into thinking an object is actually in motion. Another important component of misdirecting in animation is the use of multiple objects. Try to recall a cartoon where a character vanishes. More often than not, the character makes some sort of exaggerated motion (this is called anticipation) then disappears, and a puff of smoke follows. That smoke is an extra element, but it goes a long way toward making you believe that character actually disappeared. Very rarely does a vanishing character’s opacity simply go from one hundred per cent to zero. That’s not believable. So why do we do it with <div>s? Armed with the ammunition of metaphors and misdirection, let’s code an example. Shake, rattle, and roll (These demos require at least a basic understanding of jQuery and CSS3. Run away if you’re afraid, or brush up on CSS animation and resources for learning jQuery. Also, these demos use WebKit-specific features and are best viewed in the latest version of Safari, so performance in other browsers may vary.) We often see the design pattern of clicking a link to reveal content. Our first demo (/examples/2010/real-animation/demo1/) shows us exactly that. It uses jQuery’s slideDown() method (http://api.jquery.com/slideDown/), as many instances do. But what force acted on the <div> that caused it to open? Did pressing the button unlatch some imaginary hook? Did it activate an unlocking sequence with some gears? Take 2 Our second demo is more explicit about what happens: the button fell on the <div> and shook its content loose. Here’s how it’s done. function clickHandler(){ $('#button').addClass('animate'); return false; } Clicking the link adds a class of animate to our button. That class has the following CSS associated with it: <style> .animate { -webkit-animation-name: ANIMATE; -webkit-animation-duration: 0.25s; -webkit-animation-iteration-count: 1; -webkit-animation-timing-function: ease-in; } @-webkit-keyframes ANIMATE { from { top: 72px; } to { top: 112px; } } </style> In our keyframe definition, we’ve specified from and to states. This is great, because we can be explicit about how an object starts and finishes moving. What’s also extra handy is that these CSS keyframes broadcast events that you can react to with JavaScript. In this example, we’re listening to the webkitAnimationEnd event and opening the <div> only when the sequence is complete. Here’s that code. function attachAnimationEventHandlers(){ var wrap = document.getElementById('wrap'); wrap.addEventListener('webkitAnimationEnd', function($e) { switch($e.animationName){ case "ANIMATE" : openMain(); break; default: } }, false); } function openMain(){ $('#main .inner').slideDown('slow'); } (For more info on handling animation events, check out the documentation at the Safari Reference Library.) Take 3 The problem with the previous demo is that the subtleties of timing aren’t evident. It still feels a bit choppy. For our third demo, we’ll use percentages instead of keywords so that we can insert as many points as we need to communicate more realistic timing. The percentages allow us to add the keys to well-timed animation: anticipation, hold, release, and reaction.
<style> @-webkit-keyframes ANIMATE { 0% { top: 72px; } 40% { /* anticipation */ top: 57px; } 70% { /* hold */ top: 56px; } 80% { /* release */ top: 112px; } 100% { /* return */ top: 72px; } } </style> Take 4 The button animation is starting to feel much better, but the reaction of the <div> opening seems a bit slow. This fourth demo uses jQuery’s delay() method to time the opening precisely when we want it. Since we know the button’s animation is one second long and its reaction starts at eighty per cent of that, that puts our delay at 800ms (eighty per cent of one second). However, here’s a little pro tip: let’s start the opening at 750ms instead. The extra fifty milliseconds makes it feel more like the opening is a reaction to the exact hit of the button. Instead of listening for the webkitAnimationEnd event, we can start the opening as soon as the button is clicked, and the movement plays on the specified delay. function clickHandler(){ $('#button').addClass('animate'); openMain(); return false; } function openMain(){ $('#main .inner').delay(750).slideDown('slow'); } Take 5 We can tweak the timing of that previous animation forever, but that’s probably as close as we’re going to get to realistic animation with CSS and JavaScript. However, for some extra sauce, we could relegate the whole animation in our final demo to a video sequence which includes more nuances and extra elements for misdirection. Here’s the basis of video replacement. Add a <video> element to the page and adjust its opacity to zero. Once the button is clicked, fade the button out and start playing the video. Once the video is finished playing, fade it out and bring the button back. function clickHandler(){ if($('#main .inner').is(':hidden')){ $('#button').fadeTo(100, 0); $('#clickVideo').fadeTo(100, 1, function(){ var clickVideo = document.getElementById('clickVideo'); clickVideo.play(); setTimeout(removeVideo, 2400); openMain(); }); } return false; } function removeVideo(){ $('#button').fadeTo(500, 1); $('#clickVideo').fadeOut('slow'); } function openMain(){ $('#main .inner').delay(1100).slideDown('slow'); } Wrapping up I’m no JavaScript expert by any stretch. I’m sure a lot of you scripting wizards out there could write much cleaner and more efficient code, but I hope this gives you an idea of the theory behind more realistic motion with the technology we’re using most. This is just one model of creating more convincing animation, but you can create countless variations of this, including… Exporting <video> animations in 3-D animation tools or 2-D animation tools like Flash or After Effects Using <canvas> or SVG instead of <video> Employing specific JavaScript animation frameworks Making use of all the powerful properties of CSS Transforms and CSS Animation Trying out emerging CSS3 animation tools like Sencha Animator If it wasn’t already apparent, these demos show an exaggerated example and probably aren’t practical in a lot of environments. However, there are a handful of great sites out there that honor animation techniques—metaphor, physics, and misdirection, among others—like Benjamin De Cock’s vCard, 20 Things I Learned About Browsers and the Web by Fantasy Interactive, and the Nike Snowboarding site by Ian Coyle and HEGA. They’re wonderful testaments to what you can do to aid interaction for users. My goal was to show you the ‘why’ and the ‘how.’ Your charge is to discern the ‘where’ and the ‘when.’ Happy animating! 
2010 Dan Mall danmall 2010-12-15T00:00:00+00:00 https://24ways.org/2010/real-animation-using-javascript-css3-and-html5-video/ code
98 Absolute Columns CSS layouts have come quite a long way since the dark ages of web publishing, with all sorts of creative applications of floats, negative margins, and even background images employed in order to give us that most basic building block, the column. As the title implies, we are indeed going to be discussing columns today—more to the point, a handy little application of absolute positioning that may be exactly what you’ve been looking for… Care for a nightcap? If you’ve been developing for the web for long enough, you may be familiar with this little children’s fable, passed down from wizened Shaolin monks sitting atop the great Mt. Geocities: “Once upon a time, multiple columns of the same height could be easily created using TABLES.” Now, though we’re all comfortably seated on the standards train (and let’s be honest: even if you like to think you’ve fallen off, if you’ve given up using tables for layout, rest assured your sleeper car is still reserved), this particular—and as page layout goes, quite basic—trick is still a thorn in our CSSides compared to the ease of achieving the same effect using said Tables of Evil™. See, the orange juice masks the flavor… Creative solutions such as Dan Cederholm’s Faux Columns do a good job of making it appear as though adjacent columns maintain equal height as content expands, using a background image to fill the space that the columns cannot. Now, the Holy Grail of CSS columns behaving exactly how they would as table cells—or more to the point, as columns—still eludes us (cough CSS3 Multi-column layout module cough), but sometimes you just need, for example, a secondary column (say, a sidebar) to match the height of a primary column, without involving the creation of images. This is where a little absolute positioning can save you time, while possibly giving your layout a little more flexibility. Shaken, not stirred You’re probably familiar by now with the concept of Making the Absolute, Relative as set forth long ago by Doug Bowman, but let’s quickly review just in case: an element set to position:absolute will position itself relative to its nearest ancestor set to position:relative, rather than the browser window (see Figure 1). Figure 1. However, what you may not know is that we can anchor more than two sides of an absolutely positioned element. Yes, that’s right, all four sides (top, right, bottom, left) can be set, though in this example we’re only going to require the services of three sides (see Figure 2 for the end result). Figure 2. Trust me, this will make you feel better Our requirements are essentially the same as the standard “absolute-relative” trick—a container <div> set to position:relative, and our sidebar <div> set to position:absolute — plus another <div> that will serve as our main content column. 
We’ll also add a few other common layout elements (wrapper, header, and footer) so our example markup looks more like a real layout and less like a test case: <div id="wrapper"> <div id="header"> <h2>#header</h2> </div> <div id="container"> <div id="column-left"> <h2>#left</h2> <p>Lorem ipsum dolor sit amet…</p> </div> <div id="column-right"> <h2>#right</h2> </div> </div> <div id="footer"> <h2>#footer</h2> </div> </div> In this example, our main column (#column-left) is only being given a width to fit within the context of the layout, and is otherwise untouched (though we’re using pixels here, this trick will of course work with fluid layouts as well), and our right column (#column-right) is where the magic happens, keeping our styles nice and minimal: #container { position: relative; } #column-left { width: 480px; } #column-right { position: absolute; top: 10px; right: 10px; bottom: 10px; width: 250px; } The trick is a simple one: the #container <div> will expand vertically to fit the content within #column-left. By telling our sidebar <div> (#column-right) to attach itself not only to the top and right edges of #container, but also to the bottom, it too will expand and contract to match the height of the left column (duplicate the “lorem ipsum” paragraph a few times to see it in action). Figure 3. On the rocks “But wait!” I hear you exclaim, “when the right column has more content than the left column, it doesn’t expand! My text runneth over!” Sure enough, that’s exactly what happens, and what’s more, it’s supposed to: Absolutely positioned elements do exactly what you tell them to do, and unfortunately aren’t very good at thinking outside the box (get it? sigh…). However, this needn’t get your spirits down, because there’s an easy way to address the issue: by adding overflow:auto to #column-right, a scrollbar will automatically appear if and when needed: #column-right { position: absolute; top: 10px; right: 10px; bottom: 10px; width: 250px; overflow: auto; } While this may limit the trick’s usefulness to situations where the primary column will almost always have more content than the secondary column—or where the secondary column’s content can scroll with wild abandon—a little prior planning will make it easy to incorporate into your designs. Driving us to drink It just wouldn’t be right to have a friendly, festive holiday tutorial without inviting IE6, though in this particular instance there will be no shaming that old browser into admitting it has a problem, nor an intervention and subsequent 12-step program. That’s right my friends, this tutorial has abstained from IE6-abuse now for 30 days, thanks to the wizard Dean Edwards and his amazingly talented IE7 JavaScript library. Simply drop the Conditional Comment and <script> element into the <head> of your document, along with one tiny CSS hack that only IE6 (and below) will ever see, and that browser will be back on the straight and narrow: <!--[if lt IE 7]> <script src="http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js" type="text/javascript"></script> <style type="text/css" media="screen"> #container { zoom:1; /* helps fix IE6 by initiating hasLayout */ } </style> <![endif]--> Eggnog is supposed to be spiked, right? Of course, this is one simple example of what can be a much more powerful technique, depending on your needs and creativity.
Just don’t go coding up your wildest fantasies until you’ve had a chance to sleep off the Christmas turkey and whatever tasty liquids you happen to imbibe along the way… 2008 Dan Rubin danrubin 2008-12-22T00:00:00+00:00 https://24ways.org/2008/absolute-columns/ code
253 Clip Paths Know No Bounds CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance. The basics of a clip path Before we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everything in the element that is not within the bounds of our shape will be visually removed. Using the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely’s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element’s dimensions. See the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen. So for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines. clip-path: polygon( 30% 0%, 70% 0%, 100% 30%, 100% 70%, 70% 100%, 30% 100%, 0% 70%, 0% 30% ); A shape with fewer vertices than the eye can see It’s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box — or more specifically when we think outside the range of 0% - 100%. Our element’s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element. See the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen. By going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons. Our earlier octagon can similarly be made with only four points. See the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen. Multiple shapes, one clip path We can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon(). See the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen. Depending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element’s box, we can draw extra lines to help us get where we need to go next as needed. It can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles. See the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen. Different shapes with fill rules A polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification — the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.
See the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen. As lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path’s lines, the point is considered outside, and if it crosses an odd number the point is inside. Order of operations It is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more. These compositing effects are applied in the order: CSS Filters (e.g. filter: blur(2px)) Clipping (e.g. what this article is about) Masking (Clipping’s cousin) Blend Modes (e.g. mix-blend-mode: multiply) Opacity This means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost because we have clipped away the element’s box edges. See the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen. If we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally. Revealing content with animation CSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points needs to be the same for each keyframe, as well as the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values. See the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen. Don’t keep CSS Shapes in a box Clip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible which makes this property fairly powerful. 2018 Dan Wilson danwilson 2018-12-20T00:00:00+00:00 https://24ways.org/2018/clip-paths-know-no-bounds/ code
41 What Is Vagrant and Why Should I Care? If you run a web server, a database server and your scripting language(s) of choice on your main machine and you have not yet switched to using virtualisation in your workflow then this essay may be of some value to you. I know you exist because I bump into you daily: freelancers coming in to work on our projects; internet friends complaining about reinstalling a development environment because of an operating system upgrade; fellow agency owners who struggle to brief external help when getting a particular project up and running; or even hardcore back-end developers who “don’t do ops” and prefer to run their development stack of choice locally. There are many perfectly reasonable arguments as to why you may not have already made the switch, from being simply too busy, all the way through to a distrust of the new. I’ll admit that there are many new technologies or workflows that I hear of daily and instantly disregard because I have tool overload, that feeling I get when I hear about a new shiny thing and think “Well, what I do now works – I’ll leave it for others to play with.” If that’s you when it comes to Vagrant then I hope you’ll hear me out. The business case is compelling enough for you to make that switch; as a bonus it’s also really easy to get going. In this article we’ll start off by going through the high level, the tools available and how it all fits together. Then we’ll touch on the justification for making the switch, providing a few use cases that might resonate with you. Finally, I’ll provide a very simple example that you can follow to get yourself up and running. What? You already know what virtualisation is. You use the ability to run an operating system within another operating system every day. Whether that’s Parallels or VMware on your laptop or similar server-based tools that drive the ‘cloud’, squeezing lots of machines on to physical hardware and making it really easy to copy servers and even clusters of servers from one place to another. It’s an amazing technology which has changed the face of the internet over the past fifteen years. Simply put, Vagrant makes it really easy to work with virtual machines. According to the Vagrant docs: If you’re a designer, Vagrant will automatically set everything up that is required for that web app in order for you to focus on doing what you do best: design. Once a developer configures Vagrant, you don’t need to worry about how to get that app running ever again. No more bothering other developers to help you fix your environment so you can test designs. Just check out the code, vagrant up, and start designing. While I’m not sure I agree with the implication that all designers would get others to do the configuring, I think you’ll agree that the “Just check out the code… and start designing” premise is very compelling. You don’t need Vagrant to develop your web applications on virtual machines. All you need is a virtualisation software package, something like VMware Workstation or VirtualBox, and some code. Download the half-gigabyte operating system image that you want and install it. Then download and configure the stack you’ll be working with: let’s say Apache, MySQL, PHP. Then install some libraries, cURL and ImageMagick maybe, and finally configure the ability to easily copy files from your machine to the new virtual one, something like Samba, or install an FTP server.
Once this is all done, copy the code over, import the database, configure Apache’s virtual host, restart and cross your fingers. If you’re a bit weird like me then the above is pretty easy to do and secretly quite fun. Indeed, the amount of traffic to one of my more popular blog posts proves that a lot of people have been building themselves development servers from scratch for some time (or at least trying to anyway), whether that’s on virtual or physical hardware. Or you could use Vagrant. It allows you, or someone else, to specify in plain text how the machine’s virtual hardware should be configured and what should be installed on it. It also makes it insanely easy to get the code on the server. You check out your project, type vagrant up and start work. Why? It’s worth labouring the point that Vagrant makes it really easy; I mean look-no-tangle-of-wires-or-using-vim-and-loads-of-annoying-command-line-stuff easy to run a development environment. That’s all well and good, I hear you say, but there’s a steep learning curve, an overhead to switch. You’re busy and this all sounds great but you need to get on; you’ve got a career to build or a business to run and you don’t have time to learn new stuff right now. In short, what’s the business case? The business case involves saved time, a very low barrier to entry and the ability to give the exact same environment to somebody else. Getting your first development virtual machine running will take minutes, not counting download time. Seriously, use pre-built Vagrant files and provisioners (we’ll touch on this below) and you can start developing immediately. Once you’ve finished developing you can check in your changes, ask a colleague or freelancer to check them out, and then they run the code on the exact same machine – even if they are on the other side of the world and regardless of whether they are on Windows, Linux or Apple OS X. The configuration to build the machine isn’t a huge binary disk image that’ll take ages to download from Git; it’s two small text files that can be version controlled too, so you can see any changes made to the config and roll back if needed. No more ‘It works for me’ reports; no ‘Oh, I was using PHP 5.3.3, not PHP 5.3.11’ – you’re both working on the exact same copy of the development environment. With a tested and verified provisioning file you’ll have the confidence that when you brief your next freelancer into your team there won’t be that painful to and fro of getting the system up and running, where you’re on a Skype call and they are uttering the immortal words, ‘It still doesn’t work’. You know it works because you can run it too. This portability becomes even more important when you’re working on larger sites and systems. Need a load balancer? Multiple front-end servers and a clustered database back-end? No problem. Add each server into the same Vagrant file and a single command will build all of them. As you’ll know if you work on larger, business critical systems, keeping the operating systems in sync is a real problem: one server with a slightly different library causing sporadic and hard to trace issues is a genuine time black hole. Well, the good news is that you can use the same provisioning files to keep test and production machines in sync using your current build workflow. Let’s also not forget the most simple use case: a single developer with multiple websites running on a single machine.
If that’s you and you switch to using Vagrant-managed virtual machines then the next time you upgrade your operating system or do a fresh install there’s no chance that things will all stop working. The server config is all tucked away in version control with your code. Just pull it down and carry on coding. OK, got it. Show me already If you want to try this out you’ll need to install the latest VirtualBox and Vagrant for your platform. If you already have VMware Workstation or another supported virtualisation package installed you can use that instead but you may need to tweak my Vagrant file below. Depending on your operating system, a reboot might also be wise. Note: the commands below were executed on my MacBook, but should also work on Windows and Linux. If you’re using Windows make sure to run the command prompt as Administrator or it’ll fall over when trying to update the hosts file. As a quick sanity check let’s just make sure that we have the vagrant command in our path, so fire up a terminal and check the version number: $ vagrant -v Vagrant 1.6.5 We’ve one final thing to install and that’s the vagrant-hostsupdater plugin. Once again, in your terminal: $ vagrant plugin install vagrant-hostsupdater Installing the 'vagrant-hostsupdater' plugin. This can take a few minutes... Installed the plugin 'vagrant-hostsupdater (0.0.11)'! Hopefully that wasn’t too painful for you. There are two things that you need to manage a virtual machine with Vagrant: a Vagrant file: this tells Vagrant what hardware to spin up a provisioning file: this tells Vagrant what to do on the machine To save you copying and pasting I’ve supplied you with a simple example (ZIP) containing both of these. Unzip it somewhere sensible and in your terminal make sure you are inside the Vagrant folder: $ cd where/you/placed/it/24ways $ ls -l -rw-r--r--@ 1 bealers staff 11055 9 Nov 09:16 bealers-24ways.md -rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png drwxr-xr-x 5 bealers staff 170 8 Nov 22:54 vagrant $ cd vagrant/ $ ls -l -rw-r--r--@ 1 bealers staff 1661 8 Nov 21:50 Vagrantfile -rwxr-xr-x@ 1 bealers staff 3841 9 Nov 08:00 provision.sh The Vagrant file tells Vagrant how to configure the virtual hardware of your development machine. Skipping over some of the finer details, here’s what’s in that Vagrant file: www.vm.box = "ubuntu/trusty64" Use Ubuntu 14.04 for the VM’s OS. Vagrant will only download this once. If another project uses the same OS, Vagrant will use a cached version. www.vm.hostname = "bealers-24ways.dev" Set the machine’s hostname. If, like us, you’re using the vagrant-hostsupdater plugin, this will also get added to your hosts file, pointing to the virtual machine’s IP address. www.vm.provider :virtualbox do |vb| vb.customize ["modifyvm", :id, "--cpus", "2" ] end Here’s an example of configuring the virtual machine’s hardware on the fly. In this case we want two virtual processors. Note: this is specific for the VirtualBox provider, but you could also have a section for VMware or other supported virtualisation software. www.vm.network "private_network", ip: "192.168.13.37" This specifies that we want a private networking link between your computer and the virtual machine. It’s probably best to use a reserved private subnet like 192.168.0.0/16 or 10.0.0.0/8 www.vm.synced_folder "../", "/var/www/24ways", owner: "www-data", group: "www-data" A particularly handy bit of Vagrant magic. This maps your local 24ways parent folder to /var/www/24ways on the virtual machine. 
This means the virtual machine already has direct access to your code and so do you. There’s no messy copying or synchronisation – just edit your files and immediately run them on the server. www.vm.provision :shell, :path => "provision.sh" This is where we specify the provisioner, the script that will be executed on the machine. If you open up the provisioner you’ll see it’s a bash script that does things like: install Apache, PHP, MySQL and related libraries configure the libraries: set permissions, enable logging create a database and grant some access rights set up some code for us to develop on; in this case, fire up a vanilla WordPress installation To get this all up and running you simply need to run Vagrant from within the vagrant folder: $ vagrant up You should now get a Matrix-like stream of stuff shooting up the screen. If this is the first time Vagrant has used this particular operating system image – remember we’ve specified the latest version of Ubuntu – it’ll download the disc image and cache it for future reuse. Then all the packages are downloaded and installed, and finally all our configuration steps occur, including the download and configuration of WordPress. Halfway through proceedings it’s likely that the process will halt at a prompt something like this: ==> www: adding to (/etc/hosts) : 192.168.13.37 bealers-24ways.dev # VAGRANT: 2dbfbced1b1e79d2a0942728a0a57ece (www) / 899bd80d-4251-4f6f-91a0-d30f2d9918cc Password: You need to enter your password to give vagrant sudo rights to add the IP address and hostname mapping to your local hosts file. Once finished, fire up your browser and go to http://bealers-24ways.dev. You should see a default WordPress installation. The username for wp-admin is admin and the password is 24ways. If you take a look at your local filesystem the 24ways folder should now look like: $ cd ../ $ ls -l -rw-r--r--@ 1 bealers staff 13074 9 Nov 10:14 bealers-24ways.md drwxr-xr-x 21 bealers staff 714 9 Nov 10:06 code drwxr-xr-x 3 bealers staff 102 9 Nov 10:06 etc -rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png drwxr-xr-x 5 bealers staff 170 9 Nov 10:03 vagrant -rwxr-xr-x 1 bealers staff 1315849 9 Nov 10:06 wp-cli $ cd vagrant/ $ ls -l -rw-r--r--@ 1 bealers staff 1661 9 Nov 09:41 Vagrantfile -rwxr-xr-x@ 1 bealers staff 3836 9 Nov 10:06 provision.sh The code folder contains all the WordPress files. You can edit these directly and refresh that page to see your changes instantly. Staying in the vagrant folder, we’ll now SSH to the machine and have a quick poke around. $ vagrant ssh Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-39-generic x86_64) * Documentation: https://help.ubuntu.com/ System information as of Sun Nov 9 10:03:38 UTC 2014 System load: 1.35 Processes: 102 Usage of /: 2.7% of 39.34GB Users logged in: 0 Memory usage: 16% IP address for eth0: 10.0.2.15 Swap usage: 0% Graph this data and manage this system at: https://landscape.canonical.com/ Get cloud support with Ubuntu Advantage Cloud Guest: http://www.ubuntu.com/business/services/cloud 0 packages can be updated. 0 updates are security updates.
vagrant@bealers-24ways:~$ You’re now logged in as the Vagrant user; if you want to become root this is easy: vagrant@bealers-24ways:~$ sudo su - root@bealers-24ways:~# Or you could become the webserver user, which is a good idea if you’re editing the web files directly on the server: root@bealers-24ways:~# su - www-data www-data@bealers-24ways:~$ www-data’s home directory is /var/www so we should be able to see our magically mapped files: www-data@bealers-24ways:~$ ls -l total 4 drwxr-xr-x 1 www-data www-data 306 Nov 9 10:09 24ways drwxr-xr-x 2 root root 4096 Nov 9 10:05 html www-data@bealers-24ways:~$ cd 24ways/ www-data@bealers-24ways:~/24ways$ ls -l total 1420 -rw-r--r-- 1 www-data www-data 13682 Nov 9 10:19 bealers-24ways.md drwxr-xr-x 1 www-data www-data 714 Nov 9 10:06 code drwxr-xr-x 1 www-data www-data 102 Nov 9 10:06 etc -rw-r--r-- 1 www-data www-data 118152 Nov 9 10:08 it-works.png drwxr-xr-x 1 www-data www-data 170 Nov 9 10:03 vagrant -rwxr-xr-x 1 www-data www-data 1315849 Nov 9 10:06 wp-cli We can also see some of our bespoke configurations: www-data@bealers-24ways:~/24ways$ cat /etc/php5/mods-available/siftware.ini upload_max_filesize = 15M log_errors = On display_errors = On display_startup_errors = On error_log = /var/log/apache2/php.log memory_limit = 1024M date.timezone = Europe/London www-data@bealers-24ways:~/24ways$ ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 43 Nov 9 10:06 bealers-24ways.dev.conf -> /var/www/24ways/etc/bealers-24ways.dev.conf If you want to leave the server, simply type Ctrl+D a few times and you’ll be back where you started. www-data@bealers-24ways:~/24ways$ logout root@bealers-24ways:~# logout vagrant@bealers-24ways:~$ logout Connection to 127.0.0.1 closed. $ You can now halt the machine: $ vagrant halt ==> www: Attempting graceful shutdown of VM... ==> www: Removing hosts Bonus level The example I’ve provided isn’t very realistic. In the real world I’d expect the Vagrant file and provisioner to be included with the project and for it not to create the directory structure, which should already exist in your project. The same goes for the Apache VirtualHost file. You’ll also probably have a default SQL script to populate the database. As you work with Vagrant you might start to find bash provisioning to be quite limiting, especially if you are working on larger projects which use more than one server. In that case I would suggest you take a look at Ansible, Puppet or Chef. We use Ansible because we like YAML, but they all do the same sort of thing. The main benefit is being able to use the same Vagrant provisioning scripts to also provision test, staging and production environments using your build workflows. Having to supply a password so the hosts file can be updated gets annoying very quickly so you can give Vagrant sudo rights: $ sudo visudo Add these lines to the bottom (Shift+G then i then Ctrl+V then Esc then :wq) Cmnd_Alias VAGRANT_HOSTS_ADD = /bin/sh -c echo "*" >> /etc/hosts Cmnd_Alias VAGRANT_HOSTS_REMOVE = /usr/bin/sed -i -e /*/ d /etc/hosts %staff ALL=(root) NOPASSWD: VAGRANT_HOSTS_ADD, VAGRANT_HOSTS_REMOVE Vagrant caches the operating system images that you download but it’ll download the installed software packages every time. You can get around this by using a plugin like vagrant-cachier or, if you’re really keen, maintain local Apt repositories (or whatever the equivalent is for your server architecture).
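If you do try vagrant-cachier, wiring it in is pleasingly small. Here’s a rough sketch based on the plugin’s documented usage – treat the exact option as an assumption and check the plugin’s README for your version. Install it as before: $ vagrant plugin install vagrant-cachier Then add this to your Vagrant file (shown here against the top-level config object rather than our www machine):

if Vagrant.has_plugin?("vagrant-cachier")
  # share one package cache between every VM built from the same base box
  config.cache.scope = :box
end

With the cache scoped to the box, the second project that apt-get installs Apache pulls the packages from local disk rather than the network.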
At some point you might start getting a large number of virtual machines running on your poor hardware all at the same time, especially if you’re switching between projects a lot and each of those projects uses lots of servers. We’re just getting to that stage now, so are considering a medium-term move to a containerised option like Docker, which seems to be maturing now. If you are keen not to use any command line tools whatsoever and you’re on OS X then you could check out Vagrant Manager as it looks quite shiny. Finally, there are a huge number of resources to give you pre-built Vagrant machines, from the likes of VVV for WordPress, something similar for Perch, PuPHPet for generating various configurations, and a long list of pre-built operating systems at VagrantBox.es. Wrapping up Hopefully you can now see why it might be worthwhile to add Vagrant to your development workflow. Whether you’re an agency drafting in freelancers or a one-person band running lots of sites on your laptop using MAMP or something similar, Vagrant makes it easy to launch exact copies of the same machine in a repeatable and version controlled way. The learning curve isn’t too steep and, once configured, you can forget about it and focus on getting your work done. 2014 Darren Beale darrenbeale 2014-12-05T00:00:00+00:00 https://24ways.org/2014/what-is-vagrant-and-why-should-i-care/ process
35 SEO in 2015 (and Why You Should Care) If your business is healthy, you can always find plenty of reasons to leave SEO on your to-do list in perpetuity. After all, SEO is technical, complicated, time-consuming and potentially dangerous. The SEO industry is full of self-proclaimed gurus whose lack of knowledge can be deadly. There’s the terrifying fact that even if you dabble in SEO in the most gentle and innocent way, you might actually end up in a worse state than you were to begin with. To make matters worse, Google keeps changing the rules. There have been a bewildering number of major updates, which despite their cuddly names have had a horrific impact on website owners worldwide. Fear aside, there’s also the issue of time. It’s probably tricky enough to find the time to read this article. Setting up, planning and executing an SEO campaign might well seem like an insurmountable obstacle. So why should you care enough about SEO to do it anyway? The main reason is that you probably already see between 30% and 60% of your website traffic come from the search engines. That might make you think that you don’t need to bother, because you’re already doing so well. But you’re almost certainly wrong. If you have a look through the keyword data in your Google Webmaster Tools account, you’ll probably see that around 30–50% of the keywords used to find your website are brand names – the names of your products or companies. These are searches carried out by people who already know about you. But the people who don’t know who you are but are searching for what you sell aren’t finding you right now. This is your opportunity. If a person goes looking for a company or product by name, Google will steer them towards what they’re looking for. Their intelligence does have limits, however, and even though they know your name they won’t be completely clear about what you sell. That’s where SEO would come in. Still need more convincing? How about the fact that the seeming complexities of SEO mean that your competition are almost certainly neglecting it too? They have the same reservations as you about complexity, time and danger, and hopefully they aren’t reading this article and so are none the wiser about the well-kept secret: that 70% of SEO is easy. I’m going to lead you through what you need to do to tap into that stream of people looking for what you sell right now. What is real SEO? Real SEO is all about helping Google understand the content of your website. It’s about steering, guiding and assisting Google. Not manipulating it. It’s easy to assume that Google already understands the content and relevance of each and every page on your website, but the fact is that it needs a fair amount of hand-holding. Fortunately, helping Google along really isn’t very difficult at all. Rest assured that real SEO has nothing to do with keyword stuffing, keyword density, hacks, tricks or cunning techniques. If you hear any of these terms from your SEO advisor, run away from them as quickly as you can. Understanding your current situation – Google Analytics Before you can do anything to improve your SEO status, you need to get an idea of how you’re already doing. Below is a very quick and easy way of doing so. 1. Open up your Google Analytics account. 2. Click on the date range selector on the top-right of the interface and change the year of the first date to last year. So 12 Dec 2014 will become 12 Dec 2013. Then click on Apply. 3.
Click on the All Sessions rectangle towards the top-left, click once on Organic Traffic and click Apply. 4. Click the little black-and-white squares icon that has now appeared under the date selector on the top-right, and drag the slider all the way over to Higher Precision. 5. Change the interval buttons on the top-right of the graph to Week to make this easier to digest. At this point your graph should look something like this: It’s worth noting the approximate proportion of your visitors that currently come from organic sources. 6. Click the little downwards arrow to the right of the All Sessions rectangle and choose Remove, so that we’re only looking at the organic traffic on its own. 7. Click on Select a metric next to the Sessions button above the graph and select Pages / Session. You should then see something like this: In the example above we can see that the quantity of traffic has been increasing since the middle of August, but the quality of the traffic (as measured by the number of pages per session) has fallen significantly. How you choose to view this is down to your own graph, recent history and interpretation of events, but this should give you an indication of how things stand at the present time. Trends are often much more revealing than a snapshot of a brief moment in time. Your Google Webmaster Tools data If you’re not very familiar with your Google Webmaster Tools account, it’s really worth taking ten to fifteen minutes to see what’s on offer. I can’t recommend this enough. From the point of view of an SEO health check, I’d advise you to look into the HTML Improvements, Crawl Errors and Crawl Stats, and most importantly the Search Queries. From what you see here and the trends shown in your Analytics data, you should now have a good idea of your current status. If you want to explore further, I recommend Screaming Frog as a good diagnostics tool, or Botify if your website is large or unusually complex. Combining the data into something useful Your Google Analytics session will have shown you how you’re doing from an SEO point of view in terms of the quantity and, to some extent, the quality of your visitors. But it’s only showing you what is already working. In other words: the people who are finding you on the search engines, and clicking on your links. The Google Webmaster Tools search query data, on the other hand, will give you a better idea of what isn’t working. It will show you the keyword searches that are getting you listed in the results, but which aren’t necessarily getting clicked. And it doesn’t take much by way of expertise to see why. For example, if you see your targeted keyword, which you feel is extremely relevant, has generated over 2,000 impressions in the last month but produced only two clicks, you’ll probably find a very low average position. Bear in mind that an average position of fourteen will mean being around halfway down the second page of results. Think about how rarely you go beyond the first two or three listings, never mind to the second page of results, and you’ll understand why the click-through rate is so low. So now you have an idea of what you’re being found for at the present time. But what about the other terms? What would you like to be found for? This is one of the more common SEO mistakes, on a number of different levels. Many businesses assume that they don’t need to worry about keyword research.
They think they know what terms people use to find what they sell, and they also assume that Google understands the content on their website. This is incorrect on all counts. A better starting point is to brainstorm a small number of your most obvious keywords, then run them through Google’s Keyword Planner. Ignore the information in the Ad group ideas tab, and instead go straight to the Keyword ideas tab. Rather than wade through the very unfriendly interface, I recommend downloading the data as a spreadsheet, in which not only is more detail included, but you can also slice, dice, sort and report the data as required. From there you can delete all the irrelevant columns, and start working your way through the list, deleting any irrelevant keywords as you go along. It’s around this stage that you may hit a problem in terms of where to focus your efforts. The number of reported searches for a given keyword is of course important, but so is the level of competition. Ideally, you’d like keywords with plenty of searches but not too much competition. I personally like to factor both together by adding a column that simply divides the number of searches squared by the level of competition: (number of searches × number of searches) ÷ competition For example, a keyword with 1,000 monthly searches and a competition score of 0.5 would score (1,000 × 1,000) ÷ 0.5 = 2,000,000. There are plenty of alternatives to this basic formula, but I like it for ease of use and simplicity. Once I’ve added this column, I then sort the data by this value (largest to smallest) and I then only usually need ten to fifteen keywords at most to give me plenty of ideas to work with. This is a slightly involved but effective methodology for keyword research, as what you’re left with is a list of keywords that both Google and you consider to be relevant to the content of your website. And relevance is an important concept in SEO. Real SEO keyword research is about making sure that your customers, website and Google are all in agreement and alignment over the content of your website. Other sources of inspiration and ideas include having a look at what terms your competition are targeting, Google Trends and, of course, Google Suggest. If you’re not sure where to find these things, you can probably work out where to search for them! If you want to dive further into understanding your current search engine status, search for some of the better keywords that you just discovered and see where you rank compared to your competition. Note that it’s vital to avoid Google serving up personalised results, so either use the privacy, incognito or anonymous mode of your browser for the searches, or use a browser that you don’t normally use. I hope this is Internet Explorer. If what you find isn’t great, don’t despair: everything in SEO is fixable (terms and conditions may apply). Putting it all together You should now have a good idea of where things stand with your current search engine traffic, and a solid list of keywords that you’re not getting visitors for but very much want. All that’s left now is to work out how to use these keywords. But before we do, let’s take a quick step back. If you have in any way kept up with what’s been happening in SEO over the last couple of years, you’ll have probably heard about Google updates with names like Panda, Hummingbird, Phantom, Pirate and more. I won’t go into the technical details of what Google is doing, but it is important to understand why they’re trying to do it. At the most basic level, Google understands that there’s a very real problem with people who are trying to manipulate its index.
In response to this, Google is trying to clean up its results. They don’t want people getting fed up with bad results and considering other options – have you even tried Bing? This is extremely important. Remember earlier when I said that 70% of SEO was easy? That rule still applies. So, for example, if you have a list of keywords that you know are relevant to what you sell, then all you need to do is create great content for them. Incredibly, that’s all there is to it (terms and conditions apply again, unfortunately – see below). There is, however, one simple rule to be consistently followed without exception: that the content you create should not only be good quality and completely original, but it should also be written primarily for the human visitor and not the search engine spider. In other words, if you create some fantastic content for a keyword like “choosing a small business HR service”, then the article should not only make perfect sense if read out loud (as opposed to the same phrase being repeated fifteen times), but also provide real value to the person reading it. So the process is simple: Choose your keywords Create spectacular content Wait. Is it really that simple? Unfortunately there’s a lot more to the other 30% of SEO than just creating great content and waiting for the visitors. There are issues like helping Google understand the content on your pages and website, incoming links, page authority, domain authority, usage patterns, spam factors, canonical issues and much more. But there’s the often overlooked fact about Google: it actually does a reasonable job of working out what’s on your website and (to some extent) understanding the gist of it. If you’ve never done any SEO on your website but still get some traffic from Google, this is why. Even without dabbling in the other 30% of SEO, by creating the right content for the right visitors using the precise language and terminology that your potential customers are using, you’re significantly better off than your competition. And you can only gain from this. When you’ve checked this off your to-do list and made it an ingrained part of your content creation process, then you’re ready to delve into the other 30% of SEO. The not-so-easy side. Until then, work on understanding your current situation, exploring the opportunities, creating a list of good keywords, creating the right content for them, and starting 2015 with a little bit of smart, safe and real SEO. 2014 Dave Collins davecollins 2014-12-15T00:00:00+00:00 https://24ways.org/2014/seo-in-2015-and-why-you-should-care/ business
134 Photographic Palettes How many times have you seen a colour combination that just worked, a match so perfect that it just seems obvious? Now, how many times do you come up with those in your own work? A perfect palette looks easy when it’s done right, but it’s often maddeningly difficult and time-consuming to accomplish. Choosing effective colour schemes will always be more art than science, but there are things you can do that will make coming up with that oh-so-smooth palette just a little bit easier. A simple trick that can lead to incredibly gratifying results lies in finding a strong photograph and sampling out particularly harmonious colours. Photo Selection Not all photos are created equal. You certainly want to start with imagery that fits the eventual tone you’re attempting to create. A well-lit photo of flowers might lead to a poor colour scheme for a funeral parlour’s web site, for example. It’s worth thinking about what you’re trying to say in advance, and finding a photo that lends itself to your message. As a general rule of thumb, photos that have a lot of neutral or de-saturated tones with one or two strong colours make for the best palette; bright and multi-coloured photos are harder to derive pleasing results from. Let’s start with a relatively neutral image. Sampling In the above example, I’ve surrounded the photo with three different background colours directly sampled from the photo itself. Moving from left to right, you can see how each of the sampled colours is from an area of increasingly smaller coverage within the photo, and yet there’s still a strong harmony between the photo and the background image. I don’t really need to pick the big obvious colours from the photo to create that match; I can easily concentrate on more interesting colours that might work better for what I intend. Using a similar palette, let’s apply those colour choices to a more interesting layout: In this mini-layout, I’ve re-used the same tan colour from the previous middle image as a background, and sampled out a nicely matching colour for the top and bottom overlays, as well as the two different text colours. Because these colours all fall within a narrow range, the overall balance is harmonious. What if I want to try something a little more daring? I have a photo of stacked chairs of all different colours, and I’d like to use a few more of those. No problem, provided I watch my colour contrast: Though it uses varying shades of red, green, and yellow, this palette actually works because the values are even, and the colours muted. Placing red on top of green is usually a hideous combination of death, but if the green is drab enough and the red contrasts well enough, the result can actually be quite pleasing. I’ve chosen red as my loudest colour in this palette, and left green and yellow to play the quiet supporting roles. Obviously, there are no hard and fast rules here. You might not want to sample absolutely every colour in your scheme from a photo. There are times where you’ll need a variation that’s just a little bit lighter, or a blue that’s not in the photo. You might decide to start from a photo base and tweak, or add in colours of your own. That’s okay too. Tonal Variations I’ll leave you with a final trick I’ve been using lately, a way to bring a bit more of a formula into the equation and save some steps. Starting with the same base palette I sampled from the chairs, in the above image I’ve added a pair of overlaying squares that produce tonal variations of each primary.
The lighter variation is simply a solid white square set to 40% opacity; the darker one is a black square at 20%. (In effect, the white overlay pulls each colour channel 40% of the way towards white, while the black overlay scales each channel down by 20%.) That gives me a highlight and shadow for each colour, which would prove handy if I had to adapt this colour scheme to a larger layout. I could add a few more squares of varying opacities, or adjust the layer blending modes for different effects, but as this looks like a great place to end, I’ll leave that up to your experimental whims. Happy colouring! 2006 Dave Shea daveshea 2006-12-22T00:00:00+00:00 https://24ways.org/2006/photographic-palettes/ design
151 Get In Shape Pop quiz: what’s wrong with the following navigation? Maybe nothing. But then again, maybe there’s something bugging you about the way it comes together, something you can’t quite put your finger on. It seems well-designed, but it also seems a little… off. The design decisions that led to this eventual form were no doubt well-considered: Client: The top level needs to have a “current page” status indicator of some sort. Designer: How about a white tab? Client: Great! The second level needs to show up underneath the first level though… Designer: Okay, but that white tab I just added makes it hard to visually connect the bottom nav to the top. Client: Too late, we’ve seen the white tab and we love it. Try and make it work. Designer: Right. So I placed the second level in its own box. Client: Hmm. They seem too separated. I can’t tell that the yellow nav is the second level of the first. Designer: How about an indicator arrow? Client: Brilliant! The problem is that the end result feels awkward and forced. During the design process, little decisions were made that ultimately affect the overall shape of the navigation. What started out as a neatly contained rounded rectangle ended up as an ambiguous double shape that looks funny, though it’s often hard to pinpoint precisely why. The Shape of Things Well the why in this case is because seemingly unrelated elements in a design still end up visually interacting. Adding a new item to a page impacts everything surrounding it. In this navigation example, we’re looking at two individual objects that are close enough to each other that they form a relationship; if we reduce them to strictly their outlines, it’s a little easier to see that this particular combination registers oddly. The two shapes float with nothing really grounding them. If they were connected, perhaps it would be a different story. The white tab divides the top shape in half, leaving a gap in the middle of it. There’s very little balance in this pairing because the overall shape of the navigation wasn’t considered during the design process. Here’s another example: Gmail. Great email client, but did you ever closely look at what’s going on in that left hand navigation? The continuous blue bar around the message area spills out into the navigation. If we remove all text, we’re left with this odd configuration: Though the reasoning for anchoring the navigation highlight against the message area might be sound, the result is an irregular shape that doesn’t correspond with anything in reality. You may never consciously notice it, but once you do it’s hard to miss. One other example courtesy of last.fm: The two header areas are the same shade of pink so they appear to be closely connected. When reduced to their outlines it’s easy to see that this combination is off-balance: the edges don’t align, the sharp corners of the top shape aren’t consistent with the rounded corners of the bottom, and the part jutting out on the right of the bottom one seems fairly random. The result is a duo of oddly mis-matched shapes. Design Strategies Our minds tend to pick out familiar patterns. A clever designer can exploit this by creating references in his or her work to shapes and combinations with which viewers are already familiar. There are a few simple ideas that can be employed to help you achieve this: consistency, balance, and completion. Consistency A fairly simple way to unify the various disparate shapes on a page is by designing them with a certain amount of internal consistency. 
You don’t need to apply an identical size, colour, border, or corner treatment to every single shape; devolving a design into boring repetition isn’t what we’re after here. But it certainly doesn’t hurt to apply a set of common rules to most shapes within your work. Consider purevolume and its multiple rounded-corner panels. From the bottom of the site’s main navigation to the grey “Extras” panels halfway down the page (shown above), multiple shapes use a common border radius for unity. Different colours, different sizes, different content, but the consistent outlines create a strong sense of similarity. Not that every shape on the site follows this rule; they break the pattern right at the top with a darker sharp-cornered header, and again with the thumbnails below. But the design remains unified, nonetheless. Balance Arguably the biggest problem with the last.fm example earlier is one of balance. The area poking out of the bottom shape created a fairly obvious imbalance for no apparent reason. The right hand side is visually emphasized due to the greater area of pink coverage, but with the white gap left beside it, the emphasis seems unwarranted. It’s possible to create tension in your design by mismatching shapes and throwing off the balance, but when that happens unintentionally it can look like a mistake. Above are a few examples of design elements in balanced and unbalanced configurations. The examples in the top row are undeniably more pleasing to the eye than those in the bottom row. If these were fleshed out into full designs, those derived from the templates in the top row would naturally result in stronger work. Take a look at the header on 9Rules for a study in well-considered balance. On the left you’ll see a couple of paragraphs of text, on the right you have floating navigational items, and both flank the site’s logo. This unusual layout combines multiple design elements that look nothing alike, and places them together in a way that anchors each so that no one weighs down the header. Completion And finally we come to the idea of completion. Shapes don’t necessarily need hard outlines to be read visually as shapes, which can be exploited for various purposes. Notice how Zend’s mid-page “Business Topics” and “News” items (below) fade out to the right and bottom, but the placement of two of these side-by-side creates an impression of two panels rather than three disparate floating columns. By allowing the viewer’s eye to complete the shapes, they’ve lightened up the design of the page and removed inessential lines. In a busy design this technique could prove quite handy. Along the same lines, the individual shapes within your design may also be combined visually to form outlines of larger shapes. The differently-coloured header and main content/sidebar shapes on Veerle’s blog come together to form a single central panel, further emphasized by the slight drop shadow to the right. Implementation Studying how shape can be used effectively in design is simply a starting point. As with all things design-related, there are no hard and fast rules here; ultimately you may choose to bring these principles into your work more often, or break them for effect. But understanding how shapes interact within a page, and how that effect is ultimately perceived by viewers, is a key design principle you can use to impress your friends. 2007 Dave Shea daveshea 2007-12-16T00:00:00+00:00 https://24ways.org/2007/get-in-shape/ design
234 An Introduction to CSS 3-D Transforms Ladies and gentlemen, it is the second decade of the third millennium and we are still kicking around the same 2-D interface we got three decades ago. Sure, Apple debuted a few apps for OS X 10.7 that have a couple more 3-D flourishes, and Microsoft has had that Flip 3D for a while. But c’mon – 2011 is right around the corner. That’s Twenty Eleven, folks. Where is our 3-D virtual reality? By now, we should be zipping around the Metaverse on super-sonic motorbikes. Granted, the capability of rendering complex 3-D environments has been present for years. On the web, there are already several solutions: Flash; three.js in <canvas>; and, eventually, WebGL. Finally, we meagre front-end developers have our own three-dimensional jewel: CSS 3-D transforms! Rationale Like a beautiful jewel, 3-D transforms can be dazzling, a true spectacle to behold. But before we start tacking 3-D diamonds and rubies to our compositions like Liberace’s tailor, we owe it to our users to ask how they can benefit from this awesome feature. An entire application should not take advantage of 3-D transforms. CSS was built to style documents, not generate explorable environments. I fail to find a benefit to completing a web form that can be accessed by swivelling my viewport to the Sign-Up Room (although there have been proposals to make the web just that). Nevertheless, there are plenty of opportunities to use 3-D transforms in between interactions with the interface, via transitions. Take, for instance, the Weather App on the iPhone. The application uses two views: a details view; and an options view. Switching between these two views is done with a 3-D flip transition. This informs the user that the interface has two – and only two – views, as they can exist only on either side of the same plane. Flipping from details view to options view via a 3-D transition Also, consider slide shows. When you’re looking at the last slide, what cues tip you off that advancing will restart the cycle at the first slide? A better paradigm might be achieved with a 3-D transform, placing the slides side-by-side in a circle (carousel) in three-dimensional space; in that arrangement, the last slide obviously comes before the first. 3-D transforms are more than just eye candy. We can also use them to solve dilemmas and make our applications more intuitive. Current support The CSS 3D Transforms module has been out in the wild for over a year now. Currently, only Safari supports the specification – which includes Safari on Mac OS X and Mobile Safari on iOS. The support roadmap for other browsers varies. The Mozilla team has taken some initial steps towards implementing the module. Mike Taylor tells me that the Opera team is keeping a close eye on CSS transforms, and is waiting until the specification is fleshed out. And our best friend Internet Explorer still needs to catch up to 2-D transforms before we can talk about the 3-D variety. To make matters more perplexing, Safari’s WebKit cousin Chrome currently accepts 3-D transform declarations, but renders them in 2-D space. Chrome team member Paul Irish says that 3-D transforms are on the horizon, perhaps in one of the next 8.0 releases. This all adds up to a bit of a challenge for those of us excited by 3-D transforms. I’ll give it to you straight: missing the dimension of depth can make degradation a bit ungraceful. Unless the transform is relatively simple and holds up in non-3D-supporting browsers, you’ll most likely have to design another solution.
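Until the support story improves, a little capability sniffing goes a long way. Here’s a rough sketch of one common approach from this era – an illustration only, not robust production code, and note that a browser can expose these objects yet still render in 2-D (Chrome, ahem), which is exactly why the proper testing tool mentioned in a moment is the better bet:

// probe for the 3-D half of the WebKitCSSMatrix interface (m11–m44)
var supports3d = ( 'WebKitCSSMatrix' in window ) && ( 'm11' in new WebKitCSSMatrix() );
// hang a hook class off the root element for the stylesheet to use
document.documentElement.className += supports3d ? ' csstransforms3d' : ' no-csstransforms3d';

With that class on the root element, your stylesheet can supply a flat fallback for the .no-csstransforms3d case.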
But what’s another hurdle in a steeplechase? We web folk have had our mettle tested for years. We’re prepared to devise multiple solutions. Here’s the part of the article where I mention Modernizr, and you brush over it because you’ve read this part of an article hundreds of times before. But seriously, it’s the best way to test for CSS 3-D transform support. Use it. Even with these difficulties mounting up, trying out 3-D transforms today is the right move. The CSS 3-D transforms module was developed by the same team at Apple that produced the CSS 2D Transforms and Animation modules. Both specifications have since been adopted by Mozilla and Opera. Transforming in three-dimensions now will guarantee you’ll be ahead of the game when the other browsers catch up. The choice is yours. You can make excuses and pooh-pooh 3-D transforms because they’re too hard and only snobby Apple fans will see them today. Or, with a tip of the fedora to Mr Andy Clarke, you can get hard-boiled and start designing with the best features out there right this instant. So, I bid you, in the words of the eternal Optimus Prime… Transform and roll out. Let’s get coding. Perspective To activate 3-D space, an element needs perspective. This can be applied in two ways: using the transform property, with the perspective as a functional notation: -webkit-transform: perspective(600); or using the perspective property: -webkit-perspective: 600; See Example: Perspective 1. The red element on the left uses transform: perspective() functional notation; the blue element on the right uses the perspective property These two formats both trigger a 3-D space, but there is a difference. The first, functional notation is convenient for directly applying a 3-D transform on a single element (in the previous example, I use it in conjunction with a rotateY transform). But when used on multiple elements, the transformed elements don’t line up as expected. If you use the same transform across elements with different positions, each element will have its own vanishing point. To remedy this, use the perspective property on a parent element, so each child shares the same 3-D space. See Example: Perspective 2. Each red box on the left has its own vanishing point within the parent container; the blue boxes on the right share the vanishing point of the parent container The value of perspective determines the intensity of the 3-D effect. Think of it as a distance from the viewer to the object. The greater the value, the further the distance, so the less intense the visual effect. perspective: 2000; yields a subtle 3-D effect, as if we were viewing an object from far away. perspective: 100; produces a tremendous 3-D effect, like a tiny insect viewing a massive object. By default, the vanishing point for a 3-D space is positioned at its centre. You can change the position of the vanishing point with the perspective-origin property. -webkit-perspective-origin: 25% 75%; See Example: Perspective 3. 3-D transform functions As a web designer, you’re probably well acquainted with working in two dimensions, X and Y, positioning items horizontally and vertically. With a 3-D space initialised with perspective, we can now transform elements in all three glorious spatial dimensions, including the third Z dimension, depth. 3-D transforms use the same transform property used for 2-D transforms. If you’re familiar with 2-D transforms, you’ll find the basic 3D transform functions fairly similar.
rotateX(angle) rotateY(angle) rotateZ(angle) translateZ(tz) scaleZ(sz) Whereas translateX() positions an element along the horizontal X-axis, translateZ() positions it along the Z-axis, which runs front to back in 3-D space. Positive values position the element closer to the viewer, negative values further away. The rotate functions rotate the element around the corresponding axis. This is somewhat counter-intuitive at first, as you might imagine that rotateX will spin an object left to right. Instead, using rotateX(45deg) rotates an element around the horizontal X-axis, so the top of the element angles back and away, and the bottom gets closer to the viewer. See Example: Transforms 1. 3-D rotate() and translate() functions around each axis There are also several shorthand transform functions that require values for all three dimensions: translate3d(tx,ty,tz) scale3d(sx,sy,sz) rotate3d(rx,ry,rz,angle) Pro-tip: These foo3d() transform functions also have the benefit of triggering hardware acceleration in Safari. Dean Jackson, CSS 3-D transform spec author and main WebKit dude, writes (to Thomas Fuchs): In essence, any transform that has a 3D operation as one of its functions will trigger hardware compositing, even when the actual transform is 2D, or not doing anything at all (such as translate3d(0,0,0)). Note this is just current behaviour, and could change in the future (which is why we don’t document or encourage it). But it is very helpful in some situations and can significantly improve redraw performance. For the sake of simplicity, my demos will use the basic transform functions, but if you’re writing production-ready CSS for iOS or Safari-only, make sure to use the foo3d() functions to get the best rendering performance. Card flip We now have all the tools to start making 3-D objects. Let’s get started with something simple: flipping a card. Here’s the basic markup we’ll need: <section class="container"> <div id="card"> <figure class="front">1</figure> <figure class="back">2</figure> </div> </section> The .container will house the 3-D space. The #card acts as a wrapper for the 3-D object. Each face of the card has a separate element: .front; and .back. Even for such a simple object, I recommend using this same pattern for any 3-D transform. Keeping the 3-D space element and the object element(s) separate establishes a pattern that is simple to understand and easier to style. We’re ready for some 3-D stylin’. First, apply the necessary perspective to the parent 3-D space, along with any size or positioning styles. .container { width: 200px; height: 260px; position: relative; -webkit-perspective: 800; } Now the #card element can be transformed in its parent’s 3-D space. We’re combining absolute and relative positioning so the 3-D object is removed from the flow of the document. We’ll also add width: 100%; and height: 100%;. This ensures the object’s transform-origin will occur in the centre of .container. More on transform-origin later. Let’s add a CSS3 transition so users can see the transform take effect. #card { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; -webkit-transition: -webkit-transform 1s; } The .container’s perspective only applies to direct descendant children, in this case #card. In order for subsequent children to inherit a parent’s perspective, and live in the same 3-D space, the parent can pass along its perspective with transform-style: preserve-3d. 
Without 3-D transform-style, the faces of the card would be flattened with its parent and the back face’s rotation would be nullified. To position the faces in 3-D space, we’ll need to reset their positions in 2-D with position: absolute. In order to hide the reverse sides of the faces when they face away from the viewer, we use backface-visibility: hidden. #card figure { display: block; position: absolute; width: 100%; height: 100%; -webkit-backface-visibility: hidden; } To flip the .back face, we add a basic 3-D transform of rotateY(180deg). #card .front { background: red; } #card .back { background: blue; -webkit-transform: rotateY(180deg); } With the faces in place, the #card requires a corresponding style for when it is flipped. #card.flipped { -webkit-transform: rotateY(180deg); } Now we have a working 3-D object. To flip the card, we can toggle the flipped class. When .flipped, the #card will rotate 180 degrees, thus exposing the .back face. See Example: Card 1. Flipping a card in three dimensions Slide-flip Take another look at the Weather App 3-D transition. You’ll notice that it’s not quite the same effect as our previous demo. If you follow the right edge of the card, you’ll find that its corners stay within the container. Instead of pivoting from the horizontal centre, it pivots on that right edge. But the transition is not just a rotation – the edge moves horizontally from right to left. We can reproduce this transition just by modifying a couple of lines of CSS from our original card flip demo. The pivot point for the rotation occurs at the right side of the card. By default, the transform-origin of an element is at its horizontal and vertical centre (50% 50% or center center). Let’s change it to the right side: #card { -webkit-transform-origin: right center; } That flip now needs some horizontal movement with translateX. We’ll set the rotation to -180deg so it flips right side out. #card.flipped { -webkit-transform: translateX(-100%) rotateY(-180deg); } See Example: Card 2. Creating a slide-flip from the right edge of the card Cube Creating 3-D card objects is a good way to get started with 3-D transforms. But once you’ve mastered them, you’ll be hungry to push it further and create some true 3-D objects: prisms. We’ll start out by making a cube. The markup for the cube is similar to the card. This time, however, we need six child elements for all six faces of the cube: <section class="container"> <div id="cube"> <figure class="front">1</figure> <figure class="back">2</figure> <figure class="right">3</figure> <figure class="left">4</figure> <figure class="top">5</figure> <figure class="bottom">6</figure> </div> </section> Basic position and size styles set the six faces on top of one another in the container. .container { width: 200px; height: 200px; position: relative; -webkit-perspective: 1000; } #cube { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; } #cube figure { width: 196px; height: 196px; display: block; position: absolute; border: 2px solid black; } With the card, we only had to rotate its back face. The cube, however, requires five of the six faces to be rotated. Faces 1 and 2 will be the front and back. Faces 3 and 4 will be the sides. Faces 5 and 6 will be the top and bottom.
#cube .front { -webkit-transform: rotateY(0deg); } #cube .back { -webkit-transform: rotateX(180deg); } #cube .right { -webkit-transform: rotateY(90deg); } #cube .left { -webkit-transform: rotateY(-90deg); } #cube .top { -webkit-transform: rotateX(90deg); } #cube .bottom { -webkit-transform: rotateX(-90deg); } We could remove the first #cube .front style declaration, as this transform has no effect, but let’s leave it in to keep our code consistent. Now each face is rotated, and only the front face is visible. The four side faces are all perpendicular to the viewer, so they appear invisible. To push them out to their appropriate sides, they need to be translated out from the centre of their positions. Each side of the cube is 200 pixels wide. From the cube’s centre they’ll need to be translated out half that distance, 100px. #cube .front { -webkit-transform: rotateY(0deg) translateZ(100px); } #cube .back { -webkit-transform: rotateX(180deg) translateZ(100px); } #cube .right { -webkit-transform: rotateY(90deg) translateZ(100px); } #cube .left { -webkit-transform: rotateY(-90deg) translateZ(100px); } #cube .top { -webkit-transform: rotateX(90deg) translateZ(100px); } #cube .bottom { -webkit-transform: rotateX(-90deg) translateZ(100px); } Note here that the translateZ function comes after the rotate. The order of transform functions is important. Take a moment and soak this up. Each face is first rotated towards its position, then translated outward in a separate vector. We have a working cube, but we’re not done yet. Returning to the Z-axis origin For the sake of our users, our 3-D transforms should not distort the interface when the active panel is at its resting position. But once we start pushing elements off their Z-axis origin, distortion is inevitable. In order to keep 3-D transforms snappy, Safari composites the element, then applies the transform. Consequently, anti-aliasing on text will remain whatever it was before the transform was applied. When transformed forward in 3-D space, significant pixelation can occur. See Example: Transforms 2. Looking back at the Perspective 3 demo, note that no matter how small the perspective value is, or wherever the transform-origin may be, panel number 1 always returns to its original position, as if all those funky 3-D transforms didn’t even matter. To resolve the distortion and restore pixel perfection to our #cube, we can push the 3-D object back, so that the front face will be positioned back to the Z-axis origin. #cube { -webkit-transform: translateZ(-100px); } See Example: Cube 1. Restoring the front face to the original position on the Z-axis Rotating the cube To expose a given face of the cube, we’ll need a style that rotates the cube until that face points at the viewer. The transform values are the opposite of those for the corresponding face. We toggle the necessary class on the #cube to apply the appropriate transform. #cube.show-front { -webkit-transform: translateZ(-100px) rotateY(0deg); } #cube.show-back { -webkit-transform: translateZ(-100px) rotateX(-180deg); } #cube.show-right { -webkit-transform: translateZ(-100px) rotateY(-90deg); } #cube.show-left { -webkit-transform: translateZ(-100px) rotateY(90deg); } #cube.show-top { -webkit-transform: translateZ(-100px) rotateX(-90deg); } #cube.show-bottom { -webkit-transform: translateZ(-100px) rotateX(90deg); } Notice how the order of the transform functions has reversed. First, we push the object back with translateZ, then we rotate it.
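A quick aside on the toggling itself: here’s the kind of minimal script that could drive those classes – a sketch only, with the showFace() helper name being my own invention, assuming the #cube markup above:

var cube = document.getElementById( 'cube' );
function showFace( face ) {
  // face is one of: front, back, right, left, top, bottom
  // (this clobbers any other classes on #cube – fine for a demo)
  cube.className = 'show-' + face;
}
showFace( 'left' ); // applies .show-left, rotating the cube to face 4

Swapping the class is all it takes to re-run the transform, and once we add a transition in the next step, the rotation between faces animates on its own.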
Finishing up, we can add a transition to animate the rotation between states. #cube { -webkit-transition: -webkit-transform 1s; } See Example: Cube 2. Rotating the cube with a CSS transition Rectangular prism Cubes are easy enough to generate, as we only have to worry about one measurement. But how would we handle a non-regular rectangular prism? Let’s try to make one that’s 300 pixels wide, 200 pixels high, and 100 pixels deep. The markup remains the same as the #cube, but we’ll switch the cube id for #box. The container styles remain mostly the same: .container { width: 300px; height: 200px; position: relative; -webkit-perspective: 1000; } #box { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; } Now to position the faces. Each set of faces will need its own sizes. The smaller faces (left, right, top and bottom) need to be positioned in the centre of the container, where they can be easily rotated and then shifted outward. The thinner left and right faces get positioned left: 100px ((300 − 100) ÷ 2). The stouter top and bottom faces get positioned top: 50px ((200 − 100) ÷ 2). #box figure { display: block; position: absolute; border: 2px solid black; } #box .front, #box .back { width: 296px; height: 196px; } #box .right, #box .left { width: 96px; height: 196px; left: 100px; } #box .top, #box .bottom { width: 296px; height: 96px; top: 50px; } The rotate values can all remain the same as the cube example, but for this rectangular prism, the translate values do differ. The front and back faces are each shifted out 50 pixels since the #box is 100 pixels deep. The translate value for the left and right faces is 150 pixels for their 300 pixels width. Top and bottom panels take 100 pixels for their 200 pixels height: #box .front { -webkit-transform: rotateY(0deg) translateZ(50px); } #box .back { -webkit-transform: rotateX(180deg) translateZ(50px); } #box .right { -webkit-transform: rotateY(90deg) translateZ(150px); } #box .left { -webkit-transform: rotateY(-90deg) translateZ(150px); } #box .top { -webkit-transform: rotateX(90deg) translateZ(100px); } #box .bottom { -webkit-transform: rotateX(-90deg) translateZ(100px); } See Example: Box 1. Just like the cube example, to expose a face, the #box needs to have a style to reverse that face’s transform. Both the translateZ and rotate values are the opposites of the corresponding face. #box.show-front { -webkit-transform: translateZ(-50px) rotateY(0deg); } #box.show-back { -webkit-transform: translateZ(-50px) rotateX(-180deg); } #box.show-right { -webkit-transform: translateZ(-150px) rotateY(-90deg); } #box.show-left { -webkit-transform: translateZ(-150px) rotateY(90deg); } #box.show-top { -webkit-transform: translateZ(-100px) rotateX(-90deg); } #box.show-bottom { -webkit-transform: translateZ(-100px) rotateX(90deg); } See Example: Box 2. Rotating the rectangular box with a CSS transition Carousel Front-end developers have a myriad of choices when it comes to content carousels. Now that we have 3-D capabilities in our browsers, why not take a shot at creating an actual 3-D carousel? The markup for this demo takes the same form as the box, cube and card. Let’s make it interesting and have a carousel with nine panels. <div class="container"> <div id="carousel"> <figure>1</figure> <figure>2</figure> <figure>3</figure> <figure>4</figure> <figure>5</figure> <figure>6</figure> <figure>7</figure> <figure>8</figure> <figure>9</figure> </div> </div> Now, apply basic layout styles.
Let’s give each panel of the #carousel a 20 pixel gap between one another, done here with left: 10px; and top: 10px;. The effective width of each panel is 210 pixels. .container { width: 210px; height: 140px; position: relative; -webkit-perspective: 1000; } #carousel { width: 100%; height: 100%; position: absolute; -webkit-transform-style: preserve-3d; } #carousel figure { display: block; position: absolute; width: 186px; height: 116px; left: 10px; top: 10px; border: 2px solid black; } Next up: rotating the faces. This #carousel has nine panels. If each panel gets an equal distribution on the carousel, each panel would be rotated forty degrees from its neighbour (360 ÷ 9). #carousel figure:nth-child(1) { -webkit-transform: rotateY(0deg); } #carousel figure:nth-child(2) { -webkit-transform: rotateY(40deg); } #carousel figure:nth-child(3) { -webkit-transform: rotateY(80deg); } #carousel figure:nth-child(4) { -webkit-transform: rotateY(120deg); } #carousel figure:nth-child(5) { -webkit-transform: rotateY(160deg); } #carousel figure:nth-child(6) { -webkit-transform: rotateY(200deg); } #carousel figure:nth-child(7) { -webkit-transform: rotateY(240deg); } #carousel figure:nth-child(8) { -webkit-transform: rotateY(280deg); } #carousel figure:nth-child(9) { -webkit-transform: rotateY(320deg); } Now, the outward shift. Back when we were creating the cube and box, the translate value was simple to calculate, as it was equal to one half the width, height or depth of the object. With this carousel, there is no size we can automatically use as a reference. We’ll have to calculate the distance of the shift by other means. Drawing a diagram of the carousel, we can see that we know only two things: the width of each panel is 210 pixels; and each panel is rotated forty degrees from the next. If we split one of these segments down its centre, we get a right-angled triangle, perfect for some trigonometry. We can determine the length of r in this diagram with a basic tangent equation: r = (210 ÷ 2) ÷ tan(40° ÷ 2) = 105 ÷ tan(20°) ≈ 288. There you have it: the panels need to be translated 288 pixels in 3-D space. #carousel figure:nth-child(1) { -webkit-transform: rotateY(0deg) translateZ(288px); } #carousel figure:nth-child(2) { -webkit-transform: rotateY(40deg) translateZ(288px); } #carousel figure:nth-child(3) { -webkit-transform: rotateY(80deg) translateZ(288px); } #carousel figure:nth-child(4) { -webkit-transform: rotateY(120deg) translateZ(288px); } #carousel figure:nth-child(5) { -webkit-transform: rotateY(160deg) translateZ(288px); } #carousel figure:nth-child(6) { -webkit-transform: rotateY(200deg) translateZ(288px); } #carousel figure:nth-child(7) { -webkit-transform: rotateY(240deg) translateZ(288px); } #carousel figure:nth-child(8) { -webkit-transform: rotateY(280deg) translateZ(288px); } #carousel figure:nth-child(9) { -webkit-transform: rotateY(320deg) translateZ(288px); } If we decide to change the width of the panel or the number of panels, we only need to plug those two variables into our equation to get the appropriate translateZ value. In JavaScript terms, that equation would be: var tz = Math.round( ( panelSize / 2 ) / Math.tan( ( ( Math.PI * 2 ) / numberOfPanels ) / 2 ) ); // or simplified to var tz = Math.round( ( panelSize / 2 ) / Math.tan( Math.PI / numberOfPanels ) ); Just like our previous 3-D objects, to show any one panel we need only apply the reverse transform on the carousel. Here’s the style to show the fifth panel: -webkit-transform: translateZ(-288px) rotateY(-160deg); See Example: Carousel 1.
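To sanity-check that result, here’s the same equation evaluated with real numbers — a quick sketch using the variables from the snippet above:

// Nine panels, each with an effective width of 210px:
var panelSize = 210;
var numberOfPanels = 9;

var tz = Math.round( ( panelSize / 2 ) / Math.tan( Math.PI / numberOfPanels ) );

console.log(tz); // => 288, matching the translateZ value in the styles above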
By now, you probably have two thoughts: Rewriting transform styles for each panel looks tedious. Why bother doing high school maths? Aren’t robots supposed to be doing all this work for us? And you’re absolutely right. The repetitive nature of 3-D objects lends itself to scripting. We can offload all the monotonous transform styles to our dynamic script, which, if done correctly, will be more flexible than the hard-coded version. See Example: Carousel 2. Conclusion 3-D transforms change the way we think about the blank canvas of web design. Better yet, they change the canvas itself, trading in the flat surface for voluminous depth. My hope is that you took at least one peek at a demo and were intrigued. We web designers, who have rejoiced over border-radius, box-shadow and background gradients, now have an incredible tool at our disposal in 3-D transforms. They deserve just the same enthusiasm, research and experimentation we have seen on other CSS3 features. Now is the perfect time to take the plunge and start thinking about how to use three dimensions to elevate our craft. I’m breathless waiting for what’s to come. See you on the flip side. 2010 David DeSandro daviddesandro 2010-12-14T00:00:00+00:00 https://24ways.org/2010/intro-to-css-3d-transforms/ code
171 Rock Solid HTML Emails At some stage in your career, it’s likely you’ll be asked by a client to design an HTML email. Before you rush to explain that all the cool kids are using social media, keep in mind that when done correctly, email is still one of the best ways to promote you and your clients online. In fact, a recent survey showed that every dollar spent on email marketing this year generated more than $40 in return. That’s more than any other marketing channel, including the cool ones. There are a whole host of ingredients that contribute to a good email marketing campaign. Permission, relevance, timeliness and engaging content are all important. Even so, the biggest challenge for designers still remains building an email that renders well across all the popular email clients. Same same, but different Before getting into the details, there are some uncomfortable facts that those new to HTML email should be aware of. Building an email is not like building for the web. While web browsers continue their onward march towards standards, many email clients have stubbornly stayed put. Some have even gone backwards. In 2007, Microsoft switched the Outlook rendering engine from Internet Explorer to Word. Yes, as in the word processor. Add to this the quirks of the major web-based email clients like Gmail and Hotmail, sprinkle in a little Lotus Notes and you’ll soon realize how different the email game is. While it’s not without its challenges, rest assured it can be done. In my experience the key is to focus on three things. First, you should keep it simple. The more complex your email design, the more likely it is to choke on one of the popular clients with poor standards support. Second, you need to take your coding skills back a good decade. That often means nesting tables, bringing CSS inline and following the coding guidelines I’ll outline below. Finally, you need to test your designs regularly. Just because a template looks nice in Hotmail now, doesn’t mean it will next week. Setting your lowest common denominator To maintain your sanity, it’s a good idea to decide exactly which email clients you plan on supporting when building an HTML email. While general research is helpful, the email clients your subscribers are using can vary significantly from list to list. If you have the time there are a number of tools that can tell you specifically which email clients your subscribers are using. Trust me, if the testing shows almost none of them are using a client like Lotus Notes, save yourself some frustration and ignore it altogether. Knowing which email clients you’re targeting not only makes the building process easier, it can save you lots of time in the testing phase too. For the purpose of this article, I’ll be sharing techniques that give the best results across all of the popular clients, including the notorious ones like Gmail, Lotus Notes 6 and Outlook 2007. Just remember that pixel perfection in all email clients is a pipe dream. Let’s get started. Use tables for layout Because clients like Gmail and Outlook 2007 have poor support for float, margin and padding, you’ll need to use tables as the framework of your email. While nested tables are widely supported, consistent treatment of width, margin and padding within table cells is not. For the best results, keep the following in mind when coding your table structure.
Set the width in each cell, not the table When you combine table widths, td widths, td padding and CSS padding into an email, the final result is different in almost every email client. The most reliable way to set the width of your table is to set a width for each cell, not for the table itself. <table cellspacing="0" cellpadding="10" border="0"> <tr> <td width="80"></td> <td width="280"></td> </tr> </table> Never assume that if you don’t specify a cell width the email client will figure it out. It won’t. Also avoid using percentage based widths. Clients like Outlook 2007 don’t respect them, especially for nested tables. Stick to pixels. If you want to add padding to each cell, use either the cellpadding attribute of the table or CSS padding for each cell, but never combine the two. Err toward nesting Table nesting is far more reliable than setting left and right margins or padding for table cells. If you can achieve the same effect by table nesting, that will always give you the best result across the buggier email clients. Use a container table for body background colors Many email clients ignore background colors specified in your CSS or the <body> tag. To work around this, wrap your entire email with a 100% width table and give that a background color. <table cellspacing="0" cellpadding="0" border="0" width="100%"> <tr> <td bgcolor="#000000"> Your email code goes here. </td> </tr> </table> You can use the same approach for background images too. Just remember that some email clients don’t support them, so always provide a fallback color. Avoid unnecessary whitespace in table cells Where possible, avoid whitespace between your <td> tags. Some email clients (ahem, Yahoo! and Hotmail) can add additional padding above or below the cell contents in some scenarios, breaking your design for no apparent reason. CSS and general font formatting While some email designers do their best to avoid CSS altogether and rely on the dreaded <font> tag, the truth is many CSS properties are well supported by most email clients. See this comprehensive list of CSS support across the major clients for a good idea of the safe properties and those that should be avoided. Always move your CSS inline Gmail is the culprit for this one. By stripping the CSS from the <head> and <body> of any email, we’re left with no choice but to move all CSS inline. The good news is this is something you can almost completely automate. Free services like Premailer will move all CSS inline with the click of a button. I recommend leaving this step to the end of your build process so you can utilize all the benefits of CSS. Avoid shorthand for fonts and hex notation A number of email clients reject CSS shorthand for the font property. For example, never set your font styles like this. p { font:bold 1em/1.2em georgia,times,serif; } Instead, declare the properties individually like this. p { font-weight: bold; font-size: 1em; line-height: 1.2em; font-family: georgia,times,serif; } While we’re on the topic of fonts, I recently tested every conceivable variation of @font-face across the major email clients. The results were dismal, so unfortunately it’s web-safe fonts in email for the foreseeable future. When declaring the color property in your CSS, be aware that some email clients don’t support shorthand hexadecimal colors like color:#f60; in place of color:#ff6600;. Stick to the longhand approach for the best results. Paragraphs Just like table cell spacing, paragraph spacing can be tricky to get a consistent result across the board.
I’ve seen many designers revert to using double <br /> or DIVs with inline CSS margins to work around these shortfalls, but recent testing showed that paragraph support is now reliable enough to use in most cases (there was a time when Yahoo! didn’t support the paragraph tag at all). The best approach is to set the margin inline via CSS for every paragraph in your email, like so: p { margin: 0 0 1.6em 0; } Again, do this via CSS in the head when building your email, then use Premailer to bring it inline for each paragraph later. If part of your design is height-sensitive and calls for pixel perfection, I recommend avoiding paragraphs altogether and setting the text formatting inline in the table cell. You might need to use table nesting or cellpadding / CSS to get the desired result. Here’s an example: <td width="200" style="font-weight:bold; font-size:1em; line-height:1.2em; font-family:georgia,'times',serif;">your height sensitive text</td> Links Some email clients will overwrite your link colors with their defaults, and you can avoid this by taking two steps. First, set a default color for each link inline like so: <a href="http://somesite.com/" style="color:#ff00ff">this is a link</a> Next, add a redundant span inside the a tag. <a href="http://somesite.com/" style="color:#ff00ff"><span style="color:#ff00ff">this is a link</span></a> To some this may be overkill, but if link color is important to your design then a superfluous span is the best way to achieve consistency. Images in HTML emails The most important thing to remember about images in email is that they won’t be visible by default for many subscribers. If you start your design with that assumption, it forces you to keep things simple and ensure no important content is suppressed by image blocking. With this in mind, here are the essentials to remember when using images in HTML email: Avoid spacer images While the combination of spacer images and nested tables was popular on the web ten years ago, image blocking in many email clients has ruled it out as a reliable technique today. Most clients replace images with an empty placeholder in the same dimensions, others strip the image altogether. Given image blocking is on by default in most email clients, this can lead to a poor first impression for many of your subscribers. Stick to fixed cell widths to keep your formatting in place with or without images. Always include the dimensions of your image If you forget to set the dimensions for each image, a number of clients will invent their own sizes when images are blocked and break your layout. Also, ensure that any images are correctly sized before adding them to your email. Some email clients will ignore the dimensions specified in code and rely on the true dimensions of your image. Avoid PNGs Lotus Notes 6 and 7 don’t support 8-bit or 24-bit PNG images, so stick with the GIF or JPG formats for all images, even if it means some additional file size. Provide fallback colors for background images Outlook 2007 has no support for background images (aside from this hack to get full page background images working). If you want to use a background image in your design, always provide a background color the email client can fall back on. This solves both the image blocking and Outlook 2007 problem simultaneously. Don’t forget alt text Lack of standards support means email clients have long destroyed the chances of a semantic and accessible HTML email. Even still, providing alt text is important from an image blocking perspective. 
Even with images suppressed by default, many email clients will display the provided alt text instead. Just remember that some email clients like Outlook 2007, Hotmail and Apple Mail don’t support alt text at all when images are blocked. Use the display hack for Hotmail For some inexplicable reason, Windows Live Hotmail adds a few pixels of additional padding below images. A workaround is to set the display property like so. img {display:block;} This removes the padding in Hotmail and still gives you the predictable result in other email clients. Don’t use floats Both Outlook 2007 and earlier versions of Notes offer no support for the float property. Instead, use the align attribute of the img tag to float images in your email. <img src="image.jpg" align="right"> If you’re seeing strange image behavior in Yahoo! Mail, adding align="top" to your images can often solve this problem. Video in email With no support for JavaScript or the object tag, video in email (if you can call it that) has long been limited to animated GIFs. However, some recent research I did into the HTML5 video tag in email showed some promising results. Turns out HTML5 video does work in many email clients right now, including Apple Mail, Entourage 2008, MobileMe and the iPhone. The real benefit of this approach is that if the video isn’t supported, you can provide reliable fallback content such as an animated GIF or a clickable image linking to the video in the browser. Of course, the question of whether you should add video to email is another issue altogether. If you lean toward the “yes” side check out the technique with code samples. What about mobile email? The mobile email landscape was a huge mess until recently. With the advent of the iPhone, Android and big improvements from Palm and RIM, it’s becoming less important to think of mobile as a different email platform altogether. That said, there are a few key pointers to keep in mind when coding your emails to get a decent result for your more mobile subscribers. Keep the width less than 600 pixels Because of email client preview panes, this rule was important long before mobile email clients came of age. In truth, the iPhone and Pre have a viewport of 320 pixels, the Droid 480 pixels and the BlackBerry models hover around 360 pixels. Sticking to a maximum of 600 pixels wide ensures your design should still be readable when scaled down for each device. This width also gives good results in desktop and web-based preview panes. Be aware of automatic text resizing In what is almost always a good feature, email clients using WebKit (such as the iPhone, Pre and Android) can automatically adjust font sizes to increase readability. If testing shows this feature is doing more harm than good to your design, you can always disable it with the following CSS rule: -webkit-text-size-adjust: none; Don’t forget to test While standards support in email clients hasn’t made much progress in the last few years, there has been continual change (for better or worse) in some email clients. Web-based providers like Yahoo!, Hotmail and Gmail are notorious for this. On countless occasions I’ve seen a proven design suddenly stop working without explanation. For this reason alone it’s important to retest your email designs on a regular basis. I find a quick test every month or so does the trick, especially in the web-based clients. The good news is that after designing and testing a few HTML email campaigns, you will find that order will emerge from the chaos.
Many of these pitfalls will become quite predictable and your inbox-friendly designs will take shape with them in mind. Looking ahead Designing HTML email can be a tough pill for new designers and standardistas to swallow, especially given the fickle and retrospective nature of email clients today. With HTML5 just around the corner we are entering a new, uncertain phase. Will email client developers take the opportunity to repent of past mistakes and bring email clients into the present? The aim of groups such as the Email Standards Project is to make much of the above advice as redundant as the long-forgotten <blink> and <marquee> tags; however, only time will tell if this is to become a reality. Although email is not the most compliant (or fashionable) medium, the results speak for themselves – email is, and will continue to be, one of the most successful and targeted marketing channels available to you. As a designer with HTML email design skills in your arsenal, you have the opportunity to not only broaden your service offering, but gain a unique appreciation of how vital standards are. Next steps Ready to get started? There are a number of HTML email design galleries to provide ideas and inspiration for your own designs. http://www.campaignmonitor.com/gallery/ http://htmlemailgallery.com/ http://inboxaward.com/ Enjoy! 2009 David Greiner davidgreiner 2009-12-13T00:00:00+00:00 https://24ways.org/2009/rock-solid-html-emails/ code
257 The (Switch)-Case for State Machines in User Interfaces You’re tasked with creating a login form. Email, password, submit button, done. “This will be easy,” you think to yourself. Login form by Selecto You’ve made similar forms many times in the past; it’s essentially muscle memory at this point. You’re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you’ll have to translate the pixels to meaningful, responsive CSS values, but that’s the least of your problems. As you’re writing up the HTML structure and CSS layout and styles for this form, you realize that you don’t know what the successful “logged in” page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work. What if login fails? Where do those errors show up? Should we show errors differently if the user forgot to enter their email, or password, or both? Or should the submit button be disabled? Should we validate the email field? When should we show validation errors – as they’re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.) When should the errors disappear? What do we show during the login process? Some loading spinner? What if loading takes too long, or a server error occurs? Many more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases. Modeling Behavior Describing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story – they often leave out potential edge-cases or small yet important bits of information. However, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine. The general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states: start - not submitted yet loading - submitted and logging in success - successfully logged in error - login failed Additionally, we can describe an application as accepting a finite number of events – that is, all the possible events that can be “sent” to the application, either from the user or some other external entity: SUBMIT - pressing the submit button RESOLVE - the server responds, indicating that login is successful REJECT - the server responds, indicating that login failed Then, we can combine these states and events to describe the transitions between them. That is, when the application is in one state and an event occurs, we can specify what the next state should be: From the start state, when the SUBMIT event occurs, the app should be in the loading state. From the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state. If login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state. From the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.
Otherwise, if any other event occurs, don’t do anything and stay in the same state. That’s a pretty thorough description, similar to a user story! It’s also a bit more symbolic than a user story (e.g., “when the SUBMIT event occurs” instead of “when the user presses the submit button”), and that’s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like: Every state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event. From Visuals to Code Drawing a state machine doesn’t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn’t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application. With the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements: function loginMachine(state, event) { switch (state) { case 'start': if (event === 'SUBMIT') { return 'loading'; } break; case 'loading': if (event === 'RESOLVE') { return 'success'; } else if (event === 'REJECT') { return 'error'; } break; case 'success': // Accept no further events break; case 'error': if (event === 'SUBMIT') { return 'loading'; } break; default: // This should never occur break; } // No transition matched, so stay in the current state return state; } console.log(loginMachine('start', 'SUBMIT')); // => 'loading' This is fine (I suppose) but personally, I find it much easier to use objects: const loginMachine = { initial: "start", states: { start: { on: { SUBMIT: 'loading' } }, loading: { on: { REJECT: 'error', RESOLVE: 'success' } }, error: { on: { SUBMIT: 'loading' } }, success: { on: {} } } }; function transition(state, event) { return loginMachine .states[state] // Look up the state .on[event] // Look up the next state based on the event || state; // If not found, return the current state } console.log(transition('start', 'SUBMIT')); // => 'loading' As you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here: A Common Language Between Designers and Developers Although finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can: identify all possible states, and potentially missing states describe exactly what should happen when an event occurs in a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?) eliminate impossible states and identify states that are “unreachable” (have no entry transition) or “sunken” (have no exit transition) add features with full confidence of knowing what other states they might affect simplify redundant states or complex user flows create test paths for almost every possible user flow, and easily identify edge cases collaborate better by understanding the entire application model equally.
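To make the behaviour of the object version concrete, here’s a short walk-through of my own, feeding a sequence of events through the transition function defined earlier (the output comments are mine):

var state = loginMachine.initial;      // 'start'
state = transition(state, 'SUBMIT');   // => 'loading'
state = transition(state, 'REJECT');   // => 'error'
state = transition(state, 'SUBMIT');   // => 'loading' (the user retries)
state = transition(state, 'RESOLVE');  // => 'success'
state = transition(state, 'SUBMIT');   // => 'success' (no transition defined, so we stay put)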
Not a New Idea I’m not the first to suggest that state machines can help bridge the gap between design and development. Vince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines. User flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them. In 1999, Ian Horrocks wrote a book titled “Constructing the User Interface with Statecharts”, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today. More than a decade earlier, David Harel published “Statecharts: A Visual Formalism for Complex Systems”, in which the statechart – an extended hierarchical state machine model – is born. State machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits: Visualized modeling Precise diagrams Automatic code generation Comprehensive test coverage Accommodation of late-breaking requirements changes Moving Forward It’s time that we improve how we communicate between designers and developers, and, just as importantly, improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources: The World of Statecharts is a comprehensive guide by Erik Mogensen to using statecharts in your applications The Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling I gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases. I have also been working on XState, which is a library for “state machines and statecharts for the modern web”. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages). I’m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience. 2018 David Khourshid davidkhourshid 2018-12-12T00:00:00+00:00 https://24ways.org/2018/state-machines-in-user-interfaces/ code
195 Levelling Up for Junior Developers If you are a junior developer starting out in the web industry, things can often seem a little daunting. There are so many things to learn, and as soon as you’ve learnt one framework or tool, there seems to be something new out there. I am lucky enough to lead a team of developers building applications for the web. During a recent One to One meeting with one of our junior developers, he asked me about a learning path and the basic fundamentals that every developer should know. After a bit of digging around, I managed to come up with a (not so exhaustive) list of principles that was shared with him. In this article, I will share the list with you, and hopefully help you level up from junior developer and become a better developer all round. This list doesn’t focus on a particular programming language, but rather coding concepts as a whole. The idea behind this list is that whether you are a front-end developer, back-end developer, full stack developer or just a curious one, these principles apply to everyone that writes code. I have tried to be technology agnostic, so that you can use these tips to guide you, whatever your tech stack might be. Without any further ado and in no particular order, let’s get started. Refactoring code like a boss The Boy Scouts have a rule that goes “always leave the campground cleaner than you found it.” This rule can be applied to code too and ensures that you leave code cleaner than you found it. As a junior developer, it’s almost certain that you will either create or come across older code that could be improved. The resources below are a guide that will help point you in the right direction. My favourite book on this subject has to be Clean Code by Robert C. Martin. It’s a must read for anyone writing code as it helps you identify bad code and shows you techniques that you can use to improve existing code. If you find that in your day to day work you deal with a lot of legacy code, Improving Existing Technology through Refactoring is another useful read. Design Patterns are a general repeatable solution to a commonly occurring problem in software design. My friend and colleague Ranj Abass likes to refer to them as a “common language” that helps developers discuss the way that we write code as a pattern. My favourite book on this subject is Head First Design Patterns which goes right back to the basics. Another great read on this topic is Refactoring to Patterns. Working Effectively With Legacy Code is another one that I found really valuable. Improving your debugging skills A solid understanding of how to debug code is a must for any developer. Whether you write code for the web or purely back-end code, the ability to debug will save you time and help you really understand what is going on under the hood. If you write front-end code for the web, one of my favourite resources to help you understand how to debug code in Chrome can be found on the Chrome Dev Tools website. While some of the tips are specific to Chrome, these techniques apply to any modern browser of your choice. At Settled, we use Node.js for much of our server side code. Without a doubt, our most trusted IDE has to be Visual Studio Code and the built-in debuggers are amazing. Regardless of whether you use Node.js or not, there are a number of plugins and debuggers that you can use in the IDE. I recommend reading the website of your favourite IDE for more information.
As a side note, it is worth mentioning that Chrome Developer Tools actually has functionality that allows you to debug Node.js code too. This makes it a seamless transition from front-end code to server-side code debugging. The Debugging Mindset is an informative online article by Devon H. O’Dell and discusses the psychology of learning strategies that lead to effective problem-solving skills. A good understanding of relational databases and NoSQL databases Almost all developers will need to persist data at some point in their career. Even if you don’t write SQL queries in your day to day job, a solid understanding of how they work will help you become a better developer. If you are a complete newbie when it comes to databases, I recommend checking out Code Academy. They offer a free online course that can help you get your head around how relational databases work. The course is quite basic, but is a useful hands-on approach to learning this topic. This article provides a great explainer for the difference between the SQL and NoSQL databases, and this Stackoverflow answer goes a little deeper into the subject of the two database types. If you’d like to learn more about NoSQL queries, I would recommend starting with this article on MongoDB queries. Unfortunately, there isn’t one overall course as most NoSQL databases have their own syntax. You may also have noticed that I haven’t included other types of databases such as Graph or In-memory; it’s worth focussing on the basics before going any deeper. Performance on the web If you build for the web today, it is important to understand how the browser receives and renders the content that you send it. I am pretty passionate about Web Performance, and hope that everyone can learn how to make websites faster and more efficient. It can be fun at the same time! Steve Souders’ High Performance Websites is the godfather of web performance books. While it was created a few years ago and many of the techniques might have changed slightly, it is the original book on the subject and set up many of the ground rules that we know about web performance today. A free online resource on this topic is the Google Developers website. The site is an up to date guide on the best web performance techniques for your site. It is definitely worth a read. The network plays a key role in delivering data to your users, and it plays a big role in performance on the web. A fantastic book on this topic is Ilya Grigorik’s High Performance Browser Networking. It is also available to read online at hpbn.co. Understand the end to end architecture of your software project I find that one of the best ways to improve my knowledge is to learn about the architecture of the software at the company I work at. It gives you a good understanding as to why things are designed the way they are, why certain decisions were made, and gives you an understanding of how you might do things differently with hindsight. Try and find someone more senior, such as a Technical Lead or Software Architect, at your company and ask them to explain the overall architecture and draw a few high-level diagrams for you. Not to mention that they will be impressed with your willingness to learn. I recommend reading Clean Architecture: A Craftsman’s Guide to Software Structure and Design for more detail on this subject. Far too often, software projects can be over-engineered and over-architected; it is worth reading Just Enough Software Architecture.
The book helps developers understand how the smallest of changes can affect the outcome of your software architecture. How are things deployed A big part of creating software is actually shipping it! How is the software at your company released into the wild? Does your company do Continuous Integration? Continuous Deployment? Even if you answered no to any of these questions, it is worth finding someone with the knowledge in your company to explain these things to you. If it is not already documented, perhaps you could start a wiki to document everything you’re learning about the system – this is a great way to level up and be appreciated and invaluable. A streamlined deployment process is a beautiful thing, and understanding how it works can help you grow your knowledge as a developer. Continuous Integration is a practical read on the ins and outs of implementing this deployment technique. Docker is another great tool to use when it comes to software deployment. It can be tricky at first to wrap your head around, but it is definitely worth learning about this great technology. The documentation on the website will teach and guide you on how to get started using Docker. Writing Tests Testing is an essential tool in the developer bag of skills. Tests help you to make big refactoring changes to your code, and feel a lot more confident knowing that your changes haven’t broken anything. There are so many benefits to testing, which make it so important for developers at every level to become acquainted with them. The book that started it all for me was Roy Osherove’s The Art of Unit Testing. The code in the book is written in C#, but the principles apply to every language. It’s a great, easy-to-understand read. Another great read is How Google Tests Software and covers exactly what it says on the tin. It covers many different testing techniques such as exploratory, black box, white box, and acceptance testing and really helps you understand how large organisations test their code. Soft skills Whilst reading through this article, you’ve probably noticed that a large chunk of it focusses on code and technical ability. Without a doubt, I’d say that it is even more important to be a good teammate. If you look up the definition of soft skills in the dictionary, it is defined as “personal attributes that enable someone to interact effectively and harmoniously with other people” and I think that it sums this up perfectly. Working on your “soft skills” is something that can truly help you level up in your career. You may be the world’s greatest coder, but if your colleagues can’t get along with you, your coding skills won’t matter! While you may not learn how to become the perfect co-worker overnight, I really try and live by the motto “don’t be an arsehole”. Think about how you like to be treated and then try and treat your co-workers with the same courtesy and respect. The next time you need to make a decision at work, ask yourself “is this something an arsehole would do”? If you answered yes to that question, you probably shouldn’t do it! Summary Levelling up as a junior developer doesn’t have to be scary. Focus on the fundamentals and they should hold you in good stead, regardless of the new things that come along. Software engineering is built on these great principles that have stood the test of time. Whilst researching for this article, I came across a useful GitHub repo that is worth mentioning. Things Every Programmer Should Know is packed with useful information.
I have to admit, I didn’t know everything on there! I hope that you have found this list helpful. Some of the topics I have mentioned might not be relevant for you at this stage in your career, but should give a nudge in the right direction. After all, knowledge is power! If you are a junior developer reading this article, what would you add to it? 2017 Dean Hume deanhume 2017-12-05T00:00:00+00:00 https://24ways.org/2017/levelling-up-for-junior-developers/ code
309 HTTP/2 Server Push and Service Workers: The Perfect Partnership Being a web developer today is exciting! The web has come a long way since its early days and there are so many great technologies that enable us to build faster, better experiences for our users. One of these technologies is HTTP/2, which has a killer feature known as HTTP/2 Server Push. During this year’s Chrome Developer Summit, I watched a really informative talk by Sam Saccone, a Software Engineer on the Google Chrome team. He gave a talk entitled Planning for Performance, and one of the topics that he covered immediately piqued my interest: the idea that HTTP/2 Server Push and Service Workers were the perfect web performance combination. If you’ve never heard of HTTP/2 Server Push before, fear not – it’s not as scary as it sounds. HTTP/2 Server Push simply allows the server to send data to the browser without having to wait for the browser to explicitly request it first. In this article, I am going to run through the basics of HTTP/2 Server Push and show you how, when combined with Service Workers, you can deliver the ultimate in web performance to your users. What is HTTP/2 Server Push? When a user navigates to a URL, a browser will make an HTTP request for the underlying web page. The browser will then scan the contents of the HTML document for any assets that it may need to retrieve, such as CSS, JavaScript or images. Once it finds any assets that it needs, it will then make an HTTP request for each resource and begin downloading them one by one. While this approach works well, the problem is that each HTTP request means more round trips to the server before any data arrives at the browser. These extra round trips take time and can make your web pages load slower. Before we go any further, let’s see what this might look like when your browser makes a request for a web page. If you were to view this in the developer tools of your browser, it might look a little something like this: As you can see from the image above, once the HTML file has been downloaded and parsed, the browser then makes HTTP requests for any assets that it needs. This is where HTTP/2 Server Push comes in. The idea behind HTTP/2 Server Push is that when the browser requests a web page from the server, the server already knows about all the assets that are needed for the web page and “pushes” them to the browser. This happens when the first HTTP request for the web page takes place and it eliminates an extra round trip, making your site faster. Using the same example above, let’s “push” the JavaScript and CSS files instead of waiting for the browser to request them. The image below gives you an idea of what this might look like. Whoa, that looks different – let’s break it down a little. Firstly, you can see that the JavaScript and CSS files appear earlier in the waterfall chart. You might also notice that the loading times for the files are extremely quick. The browser doesn’t need to make an extra HTTP request to the server, instead it receives the critical files it needs all at once. Much better! There are a number of different approaches when it comes to implementing HTTP/2 Server Push. Adoption is growing and many commercial CDNs such as Akamai and Cloudflare already offer support for Server Push. You can even roll your own implementation depending on your environment. I’ve also previously blogged about building a basic HTTP/2 Server Push example using Node.js.
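To give you a flavour of a hand-rolled implementation, here is a minimal sketch in Node.js. Note the assumptions: it uses Node’s core http2 module (available in more recent versions of Node.js), and the certificate and asset file names are placeholders of mine rather than code from that post:

// A minimal HTTP/2 Server Push sketch. Assumes Node's core http2 module
// and local server.key / server.crt / index.html / style.css files.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the stylesheet to the browser before it has asked for it.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (!err) {
        pushStream.respondWithFile('style.css', { 'content-type': 'text/css' });
      }
    });
    // Then respond to the original request for the page itself.
    stream.respondWithFile('index.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);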
In this post, I’m not going to dive into how to implement HTTP/2 Server Push as that is an entire post in itself! However, I do recommend reading this article to find out more about the inner workings. HTTP/2 Server Push is awesome, but it isn’t a magic bullet. It is fantastic for improving the load time of a web page when it first loads for a user, but it isn’t that great when they request the same web page again. The reason for this is that HTTP/2 Server Push is not cache “aware”. This means that the server isn’t aware of the state of your client. If you’ve visited a web page before, the server isn’t aware of this and will push the resource again anyway, regardless of whether or not you need it. HTTP/2 Server Push effectively tells the browser that it knows better and that the browser should receive the resources whether it needs them or not. In theory, browsers can cancel HTTP/2 Server Push requests if they’ve already got something in cache, but unfortunately no browsers currently support it. The other issue is that the server will have already started to send some of the resource to the browser by the time the cancellation occurs. HTTP/2 Server Push & Service Workers So where do Service Workers fit in? Believe it or not, when combined together HTTP/2 Server Push and Service Workers can be the perfect web performance partnership. If you’ve not heard of Service Workers before, they are worker scripts that run in the background of your website. Simply put, they act as a middleman between the browser and the network and enable you to intercept any network requests that come and go from the browser. They are packed with useful features such as caching, push notifications, and background sync. Best of all, they are written in JavaScript, making it easy for web developers to understand. Using Service Workers, you can easily cache assets on a user’s device. This means when a browser makes an HTTP request for an asset, the Service Worker is able to intercept the request and first check if the asset already exists in cache on the user’s device. If it does, then it can simply serve it directly from the device instead of ever hitting the server. Let’s stop for a second and analyse what that means. Using HTTP/2 Server Push, you are able to push critical assets to the browser before the browser requests them. Then, using Service Workers you are able to cache these resources so that the browser never needs to make a request to the server again. That means a super fast first load and an even faster second load! Let’s put this into action. The following HTML code is a basic web page that retrieves a few images and two JavaScript files.
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>HTTP2 Push Demo</title> </head> <body> <h1>HTTP2 Push</h1> <img src="./images/beer-1.png" width="200" height="200" /> <img src="./images/beer-2.png" width="200" height="200" /> <br> <br> <img src="./images/beer-3.png" width="200" height="200" /> <img src="./images/beer-4.png" width="200" height="200" /> <!-- Scripts --> <script async src="./js/promise.min.js"></script> <script async src="./js/fetch.js"></script> <script> // Register the service worker if ('serviceWorker' in navigator) { navigator.serviceWorker.register('./service-worker.js').then(function(registration) { // Registration was successful console.log('ServiceWorker registration successful with scope: ', registration.scope); }).catch(function(err) { // registration failed :( console.log('ServiceWorker registration failed: ', err); }); } </script> </body> </html> In the HTML code above, I am registering a Service Worker file named service-worker.js. In order to start caching assets, I am going to use the Service Worker Toolbox. It is a lightweight helper library to help you get started creating your own Service Workers. Using this library, we can actually cache the base web page with the path /push. The Service Worker Toolbox comes with a built-in routing system which is based on the same routing as Express. With just a few lines of code, you can start building powerful caching patterns. I’ve added the following code to the service-worker.js file. (global => { 'use strict'; // Load the sw-toolbox library. importScripts('/js/sw-toolbox/sw-toolbox.js'); // The route for any requests toolbox.router.get('/push', global.toolbox.fastest); toolbox.router.get('/images/(.*)', global.toolbox.fastest); toolbox.router.get('/js/(.*)', global.toolbox.fastest); // Ensure that our service worker takes control of the page as soon as possible. global.addEventListener('install', event => event.waitUntil(global.skipWaiting())); global.addEventListener('activate', event => event.waitUntil(global.clients.claim())); })(self); Let’s break this code down further. Around line 4, I am importing the Service Worker Toolbox. Next, I am specifying a route that will listen to any requests that match the URL /push. Because I am also interested in caching the images and JavaScript for that page, I’ve told the toolbox to listen to these routes too. The best thing about the code above is that if any of the assets exist in cache, we will instantly return the cached version instead of waiting for it to download. If the asset doesn’t exist in cache, the code above will add it into cache so that we can retrieve it when it’s needed again. You may also notice the code global.toolbox.fastest – this is important because it gives you the compromise of fulfilling from the cache immediately, while firing off an additional HTTP request updating the cache for the next visit. But what does this mean when combined with HTTP/2 Server Push? Well, it means that on the first load of the web page, you are able to “push” everything to the user at once before the browser has even requested it. The Service Worker activates and starts caching the assets on the user’s device. The next time a user visits the page, the Service Worker will intercept the request and serve the asset directly from cache. Amazing! Using this technique, the waterfall chart for a repeat visit should look like the image below.
If you look closely at the image above, you’ll notice that the web page returns almost instantly without ever making an HTTP request over the network. Using the Service Worker library, we cached the base page for the route /push, which allowed us to retrieve this directly from cache. Whether used on their own or combined together, the best thing about these two features is that they are the perfect progressive enhancement. If your user’s browser doesn’t support them, they will simply fall back to HTTP/1.1 without Service Workers. Your users may not experience as fast a load time as they would with these two techniques, but it would be no different from their normal experience. HTTP/2 Server Push and Service Workers are really the perfect partners when it comes to web performance. Summary When used correctly, HTTP/2 Server Push and Service Workers can have a positive impact on your site’s load times. Together they mean super fast first load times and even faster repeat views to a web page. Whilst this technique is really effective, it’s worth noting that HTTP/2 push is not a magic bullet. Think about the situations where it might make sense to use it and don’t simply “push” everything; it could actually lead to slower page load times. If you’d like to learn more about the rules of thumb for HTTP/2 Server Push, I recommend reading this article for more information. All of the code in this example is available on my GitHub repo – if you have any questions, please submit an issue and I’ll get back to you as soon as possible. If you’d like to learn more about this technique and others relating to HTTP/2, I highly recommend watching Sam Saccone’s talk at this year’s Chrome Developer Summit. I’d also like to say a massive thank you to Robin Osborne, Andy Davies and Jeffrey Posnick for helping me review this article before putting it live! 2016 Dean Hume deanhume 2016-12-15T00:00:00+00:00 https://24ways.org/2016/http2-server-push-and-service-workers/ code
207 Want to Break Out of Comparison Syndrome? Do a Media Detox “Comparison is the thief of joy.” —Theodore Roosevelt I grew up in an environment of perpetual creativity and inventiveness. My father Dennis built and flew experimental aircraft as a hobby. During my entire childhood, there was an airplane fuselage in the garage instead of a car. My mother Deloria was a self-taught master artisan who could quickly acquire any skills that it took to work with fabric and weaving. She could sew any garment she desired, and was able to weave intricate wall hangings just by looking at black and white photos in magazines. My older sister Diane blossomed into a consummate fine artist who drew portraits with uncanny likeness, painted murals, and studied art and architecture. In addition, she loved good food and had a genius for cooking and baking, which converged in her creating remarkable art pieces out of cake that were incredibly delicious to boot. Yes. This was the household in which I grew up. While there were countless positives to being surrounded by people who were compelled to create, there was also a downside to it. I incessantly compared myself to my parents and older sister and always found myself lacking. It wasn’t a fair comparison, but tell that to a sensitive kid who wanted to fit in to her family by being creative as well. From my early years throughout my teens, I convinced myself that I would never understand how to build an airplane or at least be as proficient with tools as my father, the aeronautical engineer. Even though my sister was six years older than I was, I lamented that I would never be as good a visual artist as she was. And I marveled at my mother’s seemingly magical ability to make and tailor clothes and was certain that I would never attain her level of mastery. This habit of comparing myself to others grew over the years, continuing to subtly and effectively undermine my sense of self. I had almost reached an uneasy truce with my comparison habit when social media happened. As an early adopter of Twitter, I loved staying connected to people I met at tech conferences. However, as I began to realize my aspirations of being an author and a speaker, Twitter became a dreaded hall of mirrors where I only saw distorted reflections of my lack of achievement in other people’s success. Every person announcing a publishing deal caused me to drown under waves of envy over the imagined size of her or his book advance as I struggled to pay my mortgage. Every announcement I read of someone speaking at a conference led to thoughts of, “I wish I were speaking at that conference – I must not be good enough to be invited.” Twitter was fertile ground for my Inner Critic to run rampant. One day in 2011, my comparisons to people who I didn’t even know rose to a fever pitch. I saw a series of tweets that sparked a wave of self-loathing so profound that I spent the day sobbing and despondent, as I chastised myself for being a failure. I had fallen into the deep pit of Comparison Syndrome, and to return to anything close to being productive took a day or two of painstakingly clawing my way out. Comparison Syndrome Takes Deficiency Anxiety to Eleven Do any of these scenarios ring true? You frequently feel like a failure when viewing the success of others. You feel dispirited and paralyzed in moving forward with your own work because it will never measure up to what others have done.
You discount your ideas because you fear that they aren’t as good as those of your colleagues or industry peers. Are you making yourself miserable by thinking thoughts like these? “I’m surrounded by people who are so good at what they do, how can I possibly measure up?” “Compared to my partner, my musical ability is childish – and music is no longer fun.” “Why haven’t I accomplished more by now? My peers are so much more successful than I am.” Unenviable Envy Many people use the terms envy and jealousy interchangeably, but they are two distinct emotions. Jealousy is the fear of losing someone to a perceived rival: a threat to an important relationship and the parts of the self that are served by that relationship. Jealousy is always about the relationship between three people. Envy is wanting what another has because of a perceived shortcoming on your part. Envy is always based on a social comparison to another.1 Envy is a reaction to the feeling of lacking something. Envy always reflects something we feel about ourselves, about how we are somehow deficient in qualities, possessions, or success.2 It’s based on a scarcity mentality: the idea that there is only so much to go around, and another person got something that should rightfully be yours.3 A syndrome is a condition characterized by a set of associated symptoms. I call it Comparison Syndrome because a perceived deficiency of some sort – in talent, accomplishments, success, skills, etc. – is what initially sparks it. While at the beginning you may merely feel inadequate, the onset of the syndrome will bring additional symptoms. Lack of self-trust and feelings of low self-worth will fuel increased thoughts of not-enoughness and blindness to your unique brilliance. If left unchecked, Deficiency Anxieties can escalate to full-blown Comparison Syndrome: a form of the Inner Critic in which we experience despair from envy and define ourselves as failures in light of another’s success. The irony is that when we focus so much on what we lack, we can’t see what we have in abundance that the other person doesn’t have. And in doing so, we block what is our birthright: our creative expression. Envy shackles our creativity, keeps us trapped in place, and prevents forward movement. The Inner Critic in the form of Comparison Syndrome caused by envy blocks us from utilizing our gifts, seeing our path clearly, and reveling in our creative power. In order to keep a grip on reality and not fall into the abyss of Comparison Syndrome, we’ll quell the compulsion to compare before it happens: we will free the mental bandwidth to turn our focus inward so we can start to see ourselves clearly. Break the Compulsion to Compare “Why compare yourself with others? No one in the entire world can do a better job of being you than you.” — Krystal Volney, poet and author At some point in time, many of us succumb to moments of feeling that we are lacking and comparing ourselves unfavorably to others. As social animals, much of our self-definition comes from comparison with others. This is how our personalities develop. We learn this behavior as children, and we grow up being compared to siblings, peers, and kids in the media. Because of this, the belief that somehow, someway, we aren’t good enough becomes deeply ingrained. The problem is that whenever we deem ourselves to be “less than,” our self-esteem suffers. 
This creates a negative feedback loop where negative thoughts produce strong emotions that result in self-defeating behaviors that beget more negative thoughts. Couple this cycle with the messages we get from society that only “gifted” people are creative, and it’s no wonder that many of us will fall down the rabbit hole of Comparison Syndrome like I did on that fateful day while reading tweets. Comparing ourselves to others is worse than a zero-sum game: it’s a negative-sum game. No one wins, our self-esteem deteriorates, and our creative spark dies out. With effort, we can break the compulsion to compare and stop the decline into Comparison Syndrome by turning the focus of comparison inward to ourselves and appreciating who we’ve become. But first, we need to remove some of the instances that trigger our comparisons in the first place. Arrest: Stop the Triggers “Right discipline consists, not in external compulsion, but in the habits of mind which lead spontaneously to desirable rather than undesirable activities.” — Bertrand Russell, philosopher After my Twitter post meltdown, I knew I had to make a change. While bolstering my sense of self was clearly a priority, I also knew that my ingrained comparison habit was too strong to resist and that I needed to instill discipline. I decided then and there to establish boundaries with social media. First, to maintain my sanity, I took this on as my mantra: “I will not compare myself to strangers on the Internet or acquaintances on Facebook.” If you find yourself sliding down the slippery slope of social media comparison, you can do the same: repeat this mantra to yourself to help put on the brakes. Second, in order to reduce my triggers, I stopped reading the tweets of the people I followed. However, I continued to be active on Twitter through sharing information, responding to mentions, crowdsourcing, and direct messaging people. It worked! The only times I’d start to slip into darkness were the rare instances when I would break my rules and look at my Twitstream. But we can do even more than calm ourselves with helpful mantras. Just like my example of modifying my use of Twitter, and more recently, of separating myself from Facebook, you can get some distance from the media that activates your comparison reflex and start creating the space for other habits that are more supportive to your being to take its place. Creative Dose: Trigger-free and Happy Purpose: To stop comparison triggers in their tracks Mindfulness is a wonderful tool, but sometimes you have to get hardcore and do as much as you can to eliminate distractions so that you can first hear your own thoughts in order to know which ones you need to focus on. Here are six steps to becoming trigger-free and happier. Step 1: Make a List Pay attention to when you get the most triggered and hooked. Is it on Twitter, Facebook, Instagram, or Snapchat? Is it YouTube, TV shows, or magazines? Make a list of your top triggers.
My primary trigger is:______________________________________
My second trigger is:______________________________________
My third trigger is:______________________________________
Now that you have your list, you need to get an idea just how often you’re getting triggered. Step 2: Monitor It’s easy to think that we should track our activity on the computer, but these days, it’s no longer our computer use that is the culprit: most of us access social media and news from our phones. Fortunately, there are apps that will track the usage for both.
Seeing just how much media you consume from either or both will show you how much of an accomplice your devices are to your Comparison Syndrome, and how much you need to modify your behavior. For tracking both computer use and tablet use, this app works great: RescueTime.com tracks app usage and sends a productivity report at the end of the week via email. For your phone, there are many apps for either platform.[4] Although I recommend fully researching what is available and will work best for you, here are a few recommendations:
For both platforms: Offtime, Breakfree, Checky
For Android only: Flipd, AppDetox, QualityTime, Stay On Task
For iOS only: Moment
Install your app of choice, and see what you find. How much time are you spending on sites or apps that compel you to compare? Step 3: Just Say No Now that you know what your triggers are and how much you’re exposing yourself to them, it’s time to say No. Put yourself on a partial social media and/or media detox for a specified period of time; consider even going for a full media detox.[5] I recommend starting with one month. To help you fully commit, I recommend writing this down and posting it where you can see it. I, ___________________, commit to avoiding my comparison triggers of ___________________, ___________________, and ___________________ for the period of ___________________, starting on ___________________ and ending on ___________________. To help you out, I’ve created a social media detox commitment sheet for you. Step 4: Block When I decided to reduce my use of Twitter and Facebook to break my comparison habit, initially I tried to rely solely on self-discipline, which was only moderately successful. Then I realized that I could use the power of technology to help. Don’t think you have to rely upon sheer willpower to block, or at least limit, your exposure to known triggers. If your primary access to the items that cause you to compare yourself to others is via computers and other digitalia, use these devices to help maintain your mental equilibrium. Here are some apps and browser extensions that you can use during your media detox to help keep yourself sane and stay away from sites that could throw you into a comparison tailspin. These apps are installed onto your computer:
RescueTime.com works on both computer and mobile devices, and does a lot more than just prevent you from going to sites that will ruin your concentration: it also tracks your app usage and gives you a productivity report at the end of the week.
Focus and SelfControl (Mac-only)
To go right to the source and prevent you from visiting sites through your browser, there are browser extensions. Not only can you put in the list of the URLs that are your points of weakness, but you can also usually set the times of the day you need the self-control the most.
Google Chrome: StayFocusd, Strict Workflow, and Website Blocker
Firefox: Idderall and Leechblock
Safari: WasteNoTime and MindfulBrowsing
Edge (or Explorer): Unfortunately, there are currently no website blocking extensions for these browsers.
I currently use a browser extension to block me from using Facebook between 9:00am and 6:00pm. It’s been a boon for my sanity: I compare tons less. A bonus is that it’s been terrific for my productivity as well. Which tool will you use for your media detox time? Explore them all and then settle upon the one(s) that will work best for you. Install it and put it to work. Whichever tool you choose, you will still need to exercise discipline.
Resist the urge to browse Instagram or Facebook while waiting for your morning train. You can do it! Step 5: Relax Instead of panicking from FOMO (Fear of Missing Out), take comfort from this thought: what you don’t know won’t affect you. Start embracing JOMO (Joy of Missing Out), and the process of rebuilding and maintaining your sanity. What will you do instead of consuming the media that compels you to compare? Here are some ideas:
Read a book
Go for a walk
Have dinner with a friend
Go watch a movie
Learn how to play the harmonica
Take an improv class
Really, you could do anything. And depending on how much of your time and attention you’ve devoted to media, you could be recapturing a lot of lost moments, minutes, hours, and days. Step 6: Reconnect Use your recovered time and attention to focus on your life and reconnect with your true value-driven goals, higher aspirations, and activities that you’ve always wanted to do. This article is an excerpt from the book Banish Your Inner Critic by Denise Jacobs, and has been reprinted with permission. If you’d like to read more, you can find the book on Amazon.
1. Shane Parrish, “Mental Model: Bias from Envy and Jealousy,” Farnam Street, accessed February 9, 2017. ↩
2. Parrish, “Mental Model: Bias from Envy and Jealousy.” ↩
3. Henrik Edberg, “How to Overcome Envy: 5 Effective Tips,” Practical Happiness Advice That Works | The Positivity Blog, accessed February 9, 2017. ↩
4. Jeremy Golden, “6 Apps to Stop Your Smartphone Addiction,” Inc.com, accessed February 10, 2017. ↩
5. Emily Nickerson, “How to Silence the Voice of Doubt,” The Muse, accessed February 8, 2017. ↩
2017 Denise Jacobs denisejacobs 2017-12-19T00:00:00+00:00 https://24ways.org/2017/do-a-media-detox/ process
135 A Scripting Carol We all know the stories of the Ghost of Scripting Past – a time when the web was young and littered with nefarious scripting, designed to bestow ultimate control upon the developer, to pollute markup with event handler after event handler, and to entrench advertising in the minds of all that gazed upon her. And so it came to be that JavaScript became a dirty word, thrown out of solutions by many a Scrooge without regard to the enhancements that JavaScript could bring to any web page. JavaScript, as it was, was dead as a door-nail. With the arrival of our core philosophy that all standardistas hold to be true: “separate your concerns – content, presentation and behaviour,” we are in a new era of responsible development the Web Standards Way™. Or are we? Have we learned from the Ghosts of Scripting Past? Or are we now faced with new problems that come with new ways of implementing our solutions? The Ghost of Scripting Past If the Ghost of Scripting Past were with us it would probably say: You must remember your roots and where you came from, and realize the misguided nature of your early attempts for control. That person you see down there is real and they are the reason you exist in the first place… without them, you are nothing. In many ways we’ve moved beyond the era of control and we do take into account the user, or at least much more so than we used to. Sadly – there is one advantage that old school inline event handlers had where we assigned and reassigned CSS style property values on the fly – we knew that if JavaScript wasn’t supported, the styles wouldn’t be added because we ended up doing them at the same time. If anything, we need to have learned from the past that just because it works for us doesn’t mean it is going to work for anyone else – we need to test more scenarios than ever to cover the multitude of browsing arrangements we’ll encounter: CSS on with JavaScript off, CSS off/overridden with JavaScript on, both on, both off/not supported. It is a situation that is ripe for conflict. This may shock some of you, but there was a time when testing was actually easier: back in the day when Netscape 4 was king. Yes, that’s right. I actually kind of enjoyed Netscape 4 (hear me out, please). With NS4’s CSS implementation known as JavaScript Style Sheets, you knew that if JavaScript was off the styles were off too. The Ghost of Scripting Present With current best practice – we keep our CSS and JavaScript separate from each other. So what happens when some of our fancy, unobtrusive DOM Scripting doesn’t play nicely with our wonderfully defined style rules? Let’s look at one example of a collapsing and expanding menu to illustrate where we are now: Simple Collapsing/Expanding Menu Example We’re using some simple JavaScript (I’m using jQuery in this case) to toggle between a CSS state for expanded and not expanded:
JavaScript
$(document).ready(function(){
    TWOFOURWAYS.enableTree();
});
var TWOFOURWAYS = new Object();
TWOFOURWAYS.enableTree = function () {
    $("ul li a").toggle(function(){
        $(this.parentNode).addClass("expanded");
    }, function() {
        $(this.parentNode).removeClass("expanded");
    });
    return false;
}
CSS
ul li ul { display: none; }
ul li.expanded ul { display: block; }
At this point we’ve separated our presentation from our content and behaviour, and all is well, right? Not quite. Here’s where I typically see failures in the assessment work that I do on web sites and applications (Yes, call me Scrooge – I don’t care!).
We know our page needs to work with or without scripting, and we know it needs to work with or without CSS. All too often the testing scenarios don’t take into account combinations. Testing it out So what happens when we test this? Make sure you test with:
CSS off
JavaScript off
Use the simple example again. With CSS off, we revert to a simple nested list of links and all functionality is maintained. With JavaScript off, however, we run into a problem – we have now removed the ability to expand the menu using the JavaScript triggered CSS change. Hopefully you see the problem – we have a JavaScript and CSS dependency that is critical to the functionality of the page. Unobtrusive scripting and binary on/off tests aren’t enough. We need more. This Ghost of Scripting Present sighting is seen all too often. Let’s examine the JavaScript off scenario a bit further – if we require JavaScript to expand/show the branches of the tree, we should use JavaScript to hide them in the first place. That way we guarantee functionality in all scenarios, and have achieved our baseline level of interoperability. To revise this then, we’ll start with the sub items expanded, use JavaScript to collapse them, and then use the same JavaScript to expand them.
HTML
<ul>
    <li><a href="#">Main Item</a>
        <ul class="collapseme">
            <li><a href="#">Sub item 1</a></li>
            <li><a href="#">Sub item 2</a></li>
            <li><a href="#">Sub item 3</a></li>
        </ul>
    </li>
</ul>
CSS
/* initial style is expanded */
ul li ul.collapseme { display: block; }
JavaScript
// remove the class collapseme after the page loads
$("ul ul.collapseme").removeClass("collapseme");
And there you have it – a revised example with better interoperability. This isn’t rocket surgery by any means. It is a simple solution to a ghostly problem that is too easily overlooked (and often is). The Ghost of Scripting Future Well, I’m not so sure about this one, but I’m guessing that in a few years’ time, we’ll all have seen a few more apparitions and have a few more tales to tell. And hopefully we’ll be able to share them on 24 ways. Thanks to Drew for the invitation to contribute and thanks to everyone else out there for making this a great (and haunting) year on the web! 2006 Derek Featherstone derekfeatherstone 2006-12-21T00:00:00+00:00 https://24ways.org/2006/a-scripting-carol/ code
335 Naughty or Nice? CSS Background Images Web Standards based development involves many things – using semantically sound HTML to provide structure to our documents or web applications, using CSS for presentation and layout, using JavaScript responsibly, and of course, ensuring that all that we do is accessible and interoperable to as many people and user agents as we can. This we understand to be good. And it is good. Except when we don’t clearly think through the full implications of using those techniques. Which often happens when time is short and we need to get things done. Here are some naughty examples of CSS background images with their nicer, more accessible counterparts. Transaction related messages I’m as guilty of this as others (or, perhaps, I’m the only one that has done this, in which case this can serve as my holiday season confessional). We use lovely little icons to show status messages for a transaction to indicate whether the action was successful, or whether there was a warning or error. For example: “Your postal/zip code was not in the correct format.” Notice that we place a nice little icon there, and use background colours and borders to convey a specific message: there was a problem that needs to be fixed. Notice that all of this visual information is now contained in the CSS rules for that div:
<div class="error">
    <p>Your postal/zip code was not in the correct format.</p>
</div>
div.error {
    background: #ffcccc url(../images/error_small.png) no-repeat 5px 4px;
    color: #900;
    border-top: 1px solid #c00;
    border-bottom: 1px solid #c00;
    padding: 0.25em 0.5em 0.25em 2.5em;
    font-weight: bold;
}
Using this approach also makes it very easy to create div.success and div.warning CSS rules, meaning we have less to change in our HTML. Nice, right? No. Naughty. Visual design communicates The CSS is being used to convey very specific information. The choice of icon, the choice of background colour and borders tell us visually that there is something wrong. With the icon as a background image – there is no way to specify any alt text for the icon, and significant meaning is lost. A screen reader user, for example, misses the fact that it is an “error.” The solution? Ask yourself: what is the bare minimum needed to indicate there was an error? Currently, in the absence of CSS, there will be no icon – which (I’m hoping you agree) is critical to communicating there was an error. The icon should be considered content and not simply presentational. The borders and background colour are certainly much less critical – they belong in the CSS. Let’s change the code to place the image directly in the HTML and use appropriate alt text to better communicate the meaning of the icon to all users:
<div class="bettererror">
    <img src="images/error_small.png" alt="Error" />
    <p>Your postal/zip code was not in the correct format.</p>
</div>
div.bettererror {
    background-color: #ffcccc;
    color: #900;
    border-top: 1px solid #c00;
    border-bottom: 1px solid #c00;
    padding: 0.25em 0.5em 0.25em 2.5em;
    font-weight: bold;
    position: relative;
    min-height: 1.25em;
}
div.bettererror img {
    display: block;
    position: absolute;
    left: 0.25em;
    top: 0.25em;
    padding: 0;
    margin: 0;
}
div.bettererror p {
    position: absolute;
    left: 2.5em;
    padding: 0;
    margin: 0;
}
Compare these two examples of transactional messages Status of a Record This example is pretty straightforward. Consider the following: a real estate listing on a web site. There are three “states” for a listing: new, normal, and sold.
Here’s how they look: Example of a New Listing Example of a Sold Listing If we (forgive the pun) blindly apply the “use a CSS background image” technique, we clearly run into problems with the new and sold images – they actually contain content with no way to specify an alternative when placed in the CSS. In the case of the “new” image, we can use the same strategy as we used in the first example (the transaction result). The “new” image should be considered content and is placed in the HTML as part of the <h2>...</h2> that identifies the listing. However, when considering the “sold” listing, there are fewer changes to be made: we can keep the same look by leaving the “SOLD” image as a background image and providing the equivalent information elsewhere in the listing – namely, right in the heading. For those that can’t see the background image, the status is communicated clearly and right away. A screen reader user that is navigating by heading or viewing a listing will know right away that a particular property is sold. Of note here is that in both cases (new and sold) placing the status near the beginning of the record helps with a zoom layout as well. Better Example of a Sold Listing Summary Remember: in the holiday season, it’s what you give that counts! Using CSS background images is easy and saves time for you, but think of the children. And everyone else for that matter… CSS background images should only be used for presentational images, not for those that contain content (unless that content is already represented and readily available elsewhere). 2005 Derek Featherstone derekfeatherstone 2005-12-20T00:00:00+00:00 https://24ways.org/2005/naughty-or-nice-css-background-images/ code
59 Animating Your Brand Let’s talk about how we add animation to our designs, in a way that’s consistent with other aspects of our brand, such as fonts, colours, layouts and everything else. Animating is fun. Adding animation to our designs can bring them to life and make our designs stand out. Animations can show how the pieces of our designs fit together. They provide context and help people use our products. All too often animation is something we tack on at the end. We put a transition on a modal window or sliding menu and we often don’t think about whether that animation is consistent with our overall design. Style guides to the rescue A style guide is a document that establishes and enforces style to improve communication. It can cover anything from typography and writing style to ethics and other, broader goals. It might be a static visual document showing every kind of UI, like in the Codecademy.com redesign shown below. UI toolkit from “Reimagining Codecademy.com” by @mslima It might be a technical reference with code examples. CodePen’s new design patterns and style guide is a great example of this, showing all the components used throughout the website as live code. CodePen’s design patterns and style guide A style guide gives a wide view of your project and maintains consistency when adding new content, and we can use it to present animations. Living documents Style guides don’t need to be static. We can use them to show movement. We can share CSS keyframe animations or transitions that can then go into production. We can also explain why animation is there in the first place. Just as a style guide might explain why we chose a certain font or layout, we can use style guides to explain the intent behind animation. This means that if someone else wants to create a new component, they will know why animation applies. If you haven’t yet set up a style guide, you might want to take a look at Pattern Lab. It’s a great tool for setting up your own style guide and includes loads of design patterns to get started. There are many style guide articles linked from the excellent, open sourced, Website Style Guide Resources. Anna Debenham also has an excellent pocket book on the subject. Adding animation Before you begin throwing animation at all the things, establish the character you want to convey. Andrex Puppy (British TV ad from 1994) List some words that describe the character you’re aiming for. If it was the Andrex brand, they might have gone for: fun, playful, soft, comforting. Perhaps you’re aiming for something more serious, credible and authoritative. Or maybe exciting and intense, or relaxing and meditative. For each scenario, the animations that best represent these words will be different. In the example below, two animations both take the same length of time, but use different timing functions. One eases, and the other bounces around. Either might be good, depending on your needs. Timing functions (CodePen) Example: Kitman Labs Working with Kitman Labs, we spent a little time working out what words best reflected the brand and came up with the following:
Scientific
Precise
Fast
Solid
Dependable
Helpful
Consistent
Clear
With such a list of words in hand, we design animation that fits. We might prefer a tween that moves quickly to its destination over one that drifts slowly or bounces. We can use the list when justifying our use of animation, such as when it helps our customers understand the context of data on the page.
Or we may even choose not to animate, when that might make the message inconsistent. Create guidelines If you already have a style guide, adding animation could begin with creating an overview section. One approach is to create a local website and share it within your organisation. We recently set up a local site for this purpose. A recent project’s introduction to the topic of animation This document becomes a reference when adding animation to components. Include links to related resources or examples of animation to help demonstrate the animation style you want. Prototyping You can explain the intent of your animation style guide with live animations. This doesn’t just mean waving our hands around. We can show animation through prototypes. There are so many prototype tools right now. You could use Invision, Principle, Floid, or even HTML and CSS as embedded CodePens. A login flow prototype created in Principle These tools help when trying out ideas and working through several approaches. Create videos, animated GIFs or online demos to share with others. Experiment. Find what works for you and work with whatever lets you get the most ideas out of your head fastest. Iterate and refine an animation before it gets anywhere near production. Build up a collection Build up your guide, one animation at a time. Some people prefer to loosely structure a guide with places to put things as they are discovered or invented; others might build it one page at a time – it doesn’t matter. The main thing is that you collect animations like you would trading cards. Or Pokemon. Keep them ready to play and deliver that explosive result. You could include animated GIFs, or link to videos or even live webpages as examples of animation. The use of animation to help user experience is also covered nicely in Val Head’s UI animation and UX article on A List Apart. What matters is that you create an organised place for them to be found. Here are some ideas to get started. Logos and brandmarks Many sites include some subtle form of animation in their logos. This can draw the eye, add some character, or bring a little liveliness to an otherwise static page. Yahoo and Google have been experimenting with animation on their logos. Even a simple bouncing animation, such as the logo on Hop.ie, can add character. The CSS-animated bouncer from Hop.ie Content transitions Adding content, removing content, showing and hiding messages are all opportunities to use animation. Careful and deliberate use of animation helps convey what’s changing on screen. Animating list items with CSS (CSSAnimation.rocks) For more detail on this, I also recommend “Transitional Interfaces” by Pasquale D’Silva. Page transitions On a larger scale than the changes to content, full-page transitions can smooth the flow between sections of a site. Medium’s article transitions are a good example of this. Medium-style page transition (Tympanus.net) Preparing a layout before the content arrives We can use animation to draw a page before the content is ready, such as when a page calls a server for data before showing it. Optimistic loading grid (CodePen) Sometimes it’s good to show something to let the user know that everything’s going well. A short animation could cover just enough time to load the initial content and make the loading transition feel seamless. Interactions Hover effects, dropdown menus, slide-in menus and active states on buttons and forms are all opportunities. 
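To make that concrete, here’s a minimal CSS sketch of a brand-consistent hover effect – the class name, colour and timing values are purely illustrative, not taken from any real project:
/* one shared duration and easing for small interactions */
.nav-link {
  transition: background-color 0.2s ease-out;
}
.nav-link:hover {
  background-color: #0077cc;
}
Reusing the same duration and timing function across hover states, menus and buttons is a simple way to keep these small moments consistent with the character you chose for your brand.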
Look for ways you can remove the sudden changes and help make the experience of using your UI feel smoother. Form placeholder animation (Studio MDS) Keep animation visible It takes continuous effort to maintain a style guide and keep it up to date, but it’s worth it. Make it easy to include animation and related design decisions in your documentation and you’ll be more likely to do so. If you can make it fun, and be proud of the result, better still. When updating your style guide, be sure to show the animations at the same time. This might mean animated GIFs, videos or live embedded examples of your components. By doing this you can make animation integral to your design process and make sure it stays relevant. Inspiration and resources There are loads of great resources online to help you get started. One of my favourites is IBM’s design language site. IBM’s design language: animation design guidelines IBM describes how animation principles apply to its UI work and components. They break animations down into five categories and explain how they apply to each example. The site also includes an animation library with example videos of animations and links to source code. Example component from IBM’s component library The way IBM sets out its aims and methods is helpful not only for its existing designers and developers, but also for new hires. Furthermore, it’s a good way to show the world that IBM cares about these details. Another popular animation resource is Google’s material design. Google’s material design documentation Google’s guidelines cover everything from understanding easing through to creating engaging and useful mobile UI. This approach is visible across many of Google’s apps and software, and has influenced design across much of the web. The site is helpful both for learning about animation and as a showcase of how to illustrate examples. Frameworks If you don’t want to create everything from scratch, there are resources you can use to start using animation in your UI. One such resource is Salesforce’s Lightning design system. The system goes further than most guides. It includes a downloadable framework for adding animation to your projects. It has some interesting concepts, such as elevation settings to handle positioning on the z-axis. Example of elevation from Salesforce’s Lightning design system You should also check out Animate.css. “Just add water” — Animate.css Animate.css gives you a set of predesigned animations you can apply to page elements using classes. If you use JavaScript to add or remove classes, you can then trigger complex animations. It also plays well with scroll-triggering, and tools such as WOW.js. Learn, evolve and make it your own There’s a wealth of information and guides online that we can use to better understand animation. They can inspire and kick-start our own visual and animation styles. So let’s think of the design of animations just as we do fonts, colours and layouts. Let’s choose animation deliberately, making it part of our style guides. Many thanks to Val Head for taking the time to proofread and offer great suggestions for this article. 2015 Donovan Hutchinson donovanhutchinson 2015-12-01T00:00:00+00:00 https://24ways.org/2015/animating-your-brand/ design
16 URL Rewriting for the Fearful I think it was Marilyn Monroe who said, “If you can’t handle me at my worst, please just fix these rewrite rules, I’m getting an internal server error.” Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from. The majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable — I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we’re going to go back to basics to try to make the whole rigmarole more understandable. When we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that’s the most common case, that’s what I’ll be sticking to here. If you work with a different server, there’s often documentation specifically for translating from Apache’s mod_rewrite rules. I even found an automatic converter for nginx. This isn’t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you’re doing. If you’ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don’t worry. As Michael Jackson once insipidly whined, you are not alone. The basics Rewrite rules form part of the Apache web server’s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is:
RewriteEngine on
The general formula for a rewrite rule is:
RewriteRule URL/to/match URL/to/use/if/it/matches [options]
When we talk about URL rewriting, we’re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We’ll look at those in turn. Redirects Redirects match an incoming URL, and then redirect the user’s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work. In 1998, Sir Tim Berners-Lee wrote that Cool URIs don’t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it’s fine to move things around — especially to correct bad URL design choices of the past — provided that you can do so while keeping those old URLs working. That’s where redirects can help. A redirect might look like this:
RewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L]
Rewriting By default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html, the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file. A rewrite rule changes that process by breaking the direct relationship between the URL and the file system.
“When there’s a request for /about/history.html” a rewrite rule might say, “use the file /about_section.php instead.” This opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page. RewriteRule ^for/this/url/$ /use/this/file.php [L] Matching patterns By now you’ll have noted the weird ^ and $ characters wrapped around the URL we’re trying to match. That’s because what we’re actually using here is a pattern. Technically, it is what’s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We’ll call it a pattern because we’re not animals. What are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you’d wonder what I wanted your credit card details for, but you’d know that I wanted a two-digit month, a slash, and a two-digit year. That’s not a regular expression, but it’s the same idea: using some placeholder characters to define the pattern of the input you’re trying to match. We’ve already met two regexp characters. ^ Matches the beginning of a string $ Matches the end of a string When a pattern starts with ^ and ends with $ it’s to make sure we match the complete URL start to finish, not just part of it. There are lots of other ways to match, too: [0-9] Matches a number, 0–9. [2-4] would match numbers 2 to 4 inclusive. [a-z] Matches lowercase letters a–z [A-Z] Matches uppercase letters A–Z [a-z0-9] Combining some of these, this matches letters a–z and numbers 0–9 These are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you’re looking for within the brackets, as well as the ranges shown above. However, all these just match one single character. [0-9] would match 8 but not 84 — to match 84 we’d need to use [0-9] twice. [0-9][0-9] So, if we wanted to match 1984 we could to do this: [0-9][0-9][0-9][0-9] …but that’s getting silly. Instead, we can do this: [0-9]{4} That means any character between 0 and 9, four times. If we wanted to match a number, but didn’t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more. [0-9]+ This now matches 1, 123 and 1234567. Putting it into practice Let’s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. The articles all have URLs like this: 2013/article-title/ They start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We’d match it like this: ^[0-9]{4}/[a-z0-9-]+/$ If that looks frightening, don’t worry. Breaking it down, from the start of the URL (^) we’re looking for four numbers ([0-9]{4}). Then a slash — that’s just literal — and then anything lowercase a–z or 0–9 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$). Putting that into a rewrite rule, we end up with this: RewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php We’re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it’s supposed to display. 
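As a quick sanity check, consider how a few hypothetical URLs fare against that pattern (the slugs are made up for illustration):
2013/url-rewriting-for-the-fearful/ matches: four digits, a slash, a lowercase slug, and a trailing slash.
2013/URL-Rewriting/ fails: uppercase letters aren’t in [a-z0-9-].
2013/article-title fails: the trailing slash required by /$ is missing.
199/article-title/ fails: {4} demands exactly four digits before the first slash.
Trying a handful of URLs like these against each new pattern is a cheap way to catch mistakes before they ever reach the server.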
Capturing groups, and replacements When rewriting URLs you’ll often want to take important parts of the URL you’re matching and pass them along to the script that handles the request. That’s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we’re looking for. That means we need to call it like this: /article.php?year=2013&slug=article-title To do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what’s called a capturing group. To capture an important part of the source URL to use in the destination, surround it in parentheses. Our pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes: ^([0-9]{4})/([a-z0-9-]+)/$ To use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.) RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 The value of the year capturing group gets used as $1 and the article title slug is $2. Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern. Options Several brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are: R=301 Perform an HTTP 301 redirect to send the user’s browser to the new URL. A status of 301 means a resource has moved permanently and so it’s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes. L Last. If this rule matches, don’t bother processing the following rules. Options are set in square brackets at the end of the rule. You can set multiple options by separating them with commas: RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L] or RewriteRule ^about/([a-z0-9-]+).jsp/$ /about/$1/ [R=301,L] Common pitfalls Once you’ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. One common reason for this is hidden behind that [L] flag. L for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does — the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes. One way to avoid this problem is to keep your ‘real’ pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules. Useful snippets I find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to. Excluding a directory As mentioned above, if you’re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. 
To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won’t be applied.
RewriteRule ^_source - [L]
This is also useful for excluding things like CMS folders from your website’s rewrite rules:
RewriteRule ^perch - [L]
Adding or removing www from the domain Some folk like to use a www and others don’t. Usually, it’s best to pick one and go with it, and redirect the one you don’t want. On this site, we don’t use www.24ways.org so we redirect those requests to 24ways.org. This uses a RewriteCond, which is like an if for a rewrite rule: “If this condition matches, then apply the following rule.” In this case, it’s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything:
RewriteCond %{HTTP_HOST} ^www\.24ways\.org$ [NC]
RewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L]
The [NC] flag means ‘no case’ — the match is case-insensitive. The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance. Removing file extensions Sometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.+)$ $1.php [L,QSA]
This says if the file being asked for isn’t a file (!-f) and if it isn’t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means ‘query string append’: append the existing query string onto the rewritten URL. It’s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL. Logging for when it all goes wrong Although not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. This can be useful to track down where a rule is going wrong, if it’s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance.
RewriteEngine On
RewriteLog "/full/system/path/to/rewrite.log"
RewriteLogLevel 5
To be doubly clear: this will not work from an .htaccess file — it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.) The white screen of death One of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn’t an error message, of course. It’s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log. If you have access to your server logs, check the Apache error log and you’ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.)
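Before wrapping up, it can be useful to see several of these snippets working together. Here’s a sketch of a complete .htaccess file along the lines discussed above – it assumes the real files live in a _source folder and that articles are served by a hypothetical article.php, so treat it as an illustration rather than a drop-in configuration:
RewriteEngine on
# leave anything in _source alone, so later rules can't rematch rewritten URLs
RewriteRule ^_source - [L]
# redirect www requests to the bare domain
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
# map article URLs onto the script that renders them
RewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /_source/article.php?year=$1&slug=$2 [L]
The ordering is deliberate: the exclusion and the domain redirect come first, so the catch-all article rule only ever sees the URLs that are left.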
In conclusion Rewriting URLs can be a bear, but the advantages are clear. Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future. If you’re redesigning a site, remember that cool URIs don’t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working. Further reading To find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources. From the horse’s mouth, the Apache mod_rewrite documentation Particularly useful with that documentation is the RewriteRule Flags listing You may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial Friend of 24 ways, Neil Crosby has a mod_rewrite Beginner’s Guide which I’ve found handy over the years. As noted at the start, this isn’t a fully comprehensive guide, but I hope it’s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects? Feel free to share them in the comments. 2013 Drew McLellan drewmclellan 2013-12-01T00:00:00+00:00 https://24ways.org/2013/url-rewriting-for-the-fearful/ code
29 What It Takes to Build a Website In 1994 we lost Kurt Cobain and got the world wide web as a weird consolation prize. In the years that followed, if you’d asked me if I knew how to build a website I’d have said yes, I know HTML, so I know how to build a website. If you’d then asked me what it takes to build a website, I’d have had to admit that HTML would hardly feature. Among the design nerdery and dev geekery it’s easy to think that the nuts and bolts of building a page just need to be multiplied up and Ta-da! There’s your website. That can certainly be true with weekend projects and hackery for fun. It works for throwing something together on GitHub or experimenting with ideas on your personal site. But what about working professionally on client projects? The web is important, so we need to build it right. It’s 2015 – your job involves people paying you money for building websites. What does it take to build a website and to do it right? What practices should we adopt to make really great, successful and professional web projects in 2015? I put that question to some friends and 24 ways authors to see what they thought. Getting the tech right Inevitably, it all starts with the technology. We work in a technical medium, after all. From Notepad and WinFTP through to continuous integration and deployment – how do you build sites? Create a stable development environment There’s little more likely to send a web developer into a wild panic and a client into a wild rage than making a new site live and things just not working. That’s why it’s important to have realistic development and staging environments that mimic the live server as closely as possible. Are you in the habit of developing new sites right on the client’s server? Or maybe in a subfolder on your local machine? It’s time to reconsider. Charlie Perrins writes: Don’t work on a live server – this feels like one of those gear-changing moments for a developer’s growth. Build something that works just as well locally on your own machine as it does on a live server, and capture the differences in the code between the local and live version in a single config file. Ultimately, if you can get all the differences between environments down to a config level then you’ll be in a really good position to automate the deployment process at some point in the future. Anything that creates a significant difference between the development and the live environments has the potential to cause problems you won’t know about until the site goes live – and at that point the problems are very public and very embarrassing, not to mention unprofessional. A reasonable solution is to use a tool like MAMP PRO which enables you to set up an individual local website for each project you work on. Importantly, individual sites give you both consistency of paths between development and live, but also the ability to configure server options (like PHP versions and configuration, for example) to match the live site. Better yet is to use a virtual machine, managed with a tool such as Vagrant. If you’re interested in learning more about that, we have an article on that subject later in the series. Use source control Trent Walton writes: We use source control, and it’s become the centerpiece for how we handle collaboration, enhancements, and issues. It drives our process. I’m hoping by now that you’re either using source control for all your work, or feeling a nagging guilt that you should be. 
Be it Git, Mercurial, Subversion (name your poison), a revision control system enables you to keep track of changes, revert anything that breaks, and keep rolling backups of your project. The benefits only start there, and Charlie Perrins recommends using source control “not just as a personal backup of your code, but as a way to play nicely with other developers.” Noting the benefits when collaborating with other developers, he adds: “Graduating from being the sole architect of your codebase to contributing to a shared codebase is a huge leap for a developer. Perhaps a practical way for people who tend to work on their own to do that would be to submit a pull request or a patch to an open source project or plugin.” Richard Rutter of Clearleft sees clear advantages for the client, too. He recommends using source control “preferably in some sort of collaborative environment that you can open up or hand over to the client” – a feature found with hosted services such as GitHub. If you’d like to hone your Git skills, Emma Jane Westby wrote Git for Grown-ups in last year’s 24 ways. Don’t repeat, automate! Tim Kadlec is a big proponent of automating your build process: I’ve been hammering that home to every client I’ve had this year. It’s amazing how many companies don’t really have a formal build/deployment process in place. So many issues on the web (performance, accessibility, etc.) can be greatly improved just by having a layer of automation involved. For example, graphic editing software spits out ridiculously bloated images. Very frequently, that’s what ends up getting put on a site. If you have a build process, you can have the compression automated and start seeing immediate gains for no effort. On a recent project, they were able to shave around 1.5MB from their site weight simply by automating compression. Once you have your code in source control, some of that automation can be made easier. Brian Suda writes: We have a few bash scripts that run on git commit: they compile the less, jslint and remove white-space, basically the 3 Cs: Compress, Concatenate, Combine. This is now part of our workflow without even realising it. One great way to get started with a build process is to use a tool like Grunt, and a great way to get started with Grunt is to read Chris Coyier’s Grunt for People Who Think Things Like Grunt are Weird and Hard. Tim reinforces: Issues like [image compression] — or simple accessibility issues like alt tags on images — should never be able to hit a live server. If you can detect it, you can automate it. And if you can automate it, you can free up time for designers and developers to focus on more challenging — and interesting — problems. A clear call to arms to tighten up and formalise development and deployment practices. The less that has to be done manually or is susceptible to change, the less that can go wrong when a site is built and deployed. Any procedures that are automated are no longer dependent on a single person’s knowledge, making it easier to build your team or just cope when someone important is out of the office or leaves. If you’re interested in kicking the FTP habit and automating your site deployments, we have an article later in the series just for you. Build systems, not sites One big theme arising this year was that of building websites as systems, not as individual pages. Brad Frost: For me, teams making websites in 2015 shouldn’t be working on just-another-redesign redesign.
People are realizing that in order to make stable, future-friendly, scalable, extensible web experiences they’re going to need to think more systematically. That means crafting deliberate and thoughtful design systems. That means establishing front-end style guides. That means killing the out-dated, siloed, assembly-line waterfall process and getting cross-disciplinary teams working together in meaningful ways. That means treating development as design. That means treating performance as design. That means taking the time out of the day to establish the big picture, rather than aimlessly crawling along quarter by quarter. Designer and developer Jina Bolton also advocates the use of style guides, and recommends making the guide a project deliverable: Consider adding on a style guide/UI library to your project as a deliverable for maintainability and thinking through all UI elements and components. Val Head agrees: “build and maintain a style guide for each project” she wrote. On the subject of approaching a redesign, she added: A UI inventory goes a long way to helping get your head around what a design system needs in the early stages of a redesign project. So what about that old chestnut, responsive web design? Should we be making sites responsive by default? How about mobile first? Richard Rutter: Think mobile first unless you have a very good reason not to. Remember to take the client with you on this principle, otherwise it won’t work as a convincing piece of design. Trent Walton adds: The more you can test and sort of skew your perception for what is typical on the web, the better. 4k displays hooked up to 100Mbps connections can make one extremely unsympathetic. The value of testing with real devices is something Ruth John appreciates. She wrote: I still have my own small device lab at home, even though I work permanently for a well-established company (which has a LOT of devices at its disposal) – it just means I can get a good overview of how things are looking during development. And speaking of systems, Mark Norman Francis recommends the use of measuring tools to aid the design process; “[U]se analytics and make decisions from actual data” he suggests, rather than relying totally on intuition. Tim Kadlec adds a word on performance planning: I think having a performance budget in place should now be a given on any project. We’ve proven pretty conclusively through a hundred and one case studies that performance matters. And over the last year or so, we’ve really seen a lot of great tools emerge to help track and enforce performance budgets. There’s not really a good excuse for not using one any more. It’s clear that in the four years since Ethan Marcotte’s Responsive Web Design article the diversity of screen sizes, network connection speeds and input methods has only increased. New web projects should presume visitors will be using anything from a watch up to a big screen desktop display, and from being offline, through to GPRS, 3G and fast broadband. Will it take more time to design and build for those constraints? Yes, it most likely will. If Internet Explorer is brave enough to ask to be your default browser, you can be brave enough to tell your client they need to build responsively. Working collaboratively A big part of delivering a successful website project is how we work together, both as a design team and a wider project team with the client. Val Head recommends an open line of communication: Keep conversations going. With clients, with teammates. 
Talking is so important with the way we work now. A good team conversation place, like Slack, is slowly becoming invaluable for me too. Ruth John agrees: We’ve recently opened up our lines of communication by using Slack. It has transformed the way we work. We’re easily more productive and collaborative on projects, as well as making it a lot easier for us all to work remotely (including freelancers). She goes on to point out how tools can be combined to ease team communication without adding further complications: We have a private GitHub organisation (which everyone who works with us is granted access to), which not only holds all our project code but also a team wiki. This has lots of information to get you set up within the team, as well as coding guidelines and best practices and other admin info, like contact numbers/emails for the team. Small-A agile is also the theme of the day, with Mark Norman Francis suggesting an approach of “small iterations with constant feedback around individual features, not spec-it-all-first”. He also encourages you to review as you go, at each stage of the project: Always reflect on what went well and what went badly, and how you can learn from that, even if not Doing Agile™. Ultimately “best practices” should come from learning lessons (both good and bad). Richard Rutter echoes this, warning against working in isolation from the client for too long: Avoid big reveals. Your engagement with the client should be participatory. In business no one likes surprises. This experience rings true for Ruth John who recommends involving real users in the feedback loop, not just the client: We also try and get feedback on what we’re building as soon and as often as we can with our stakeholders/clients and real users. We should also remember that our role is to serve the client’s needs, not just bill them for whatever we can. Brian Suda adds: Don’t sell clients on things they don’t need. We can spout a lot of jargon and scare clients into thinking you are a god. We can do things few can now, but you can’t rip people off because they are unknowledgeable. But do clients know what they’re getting, even when they see it? Trent Walton has an interesting take: We focus on prototypes over image-based comps at all costs, especially when meetings are involved. It’s much easier to assess a prototype, and too often with image-based comps, discussions devolve into how something might feel when actually live, or how a layout could change to fit a given viewport. Val Head also likes to get work into the browser: Sketch design ideas with any software you like, but get to the browser as soon as possible. Beyond your immediate team, Emma Jane Westby has advice for looking further afield: Invest time into building relationships within your (technical) community. You never know when you might be able to lend a hand; or benefit from someone who’s able to lend theirs. And when things don’t go according to plan, Brian Suda has the following advice: If something doesn’t work out, be professional and don’t burn bridges. It will always come back to you. The best work comes from working collaboratively, not just as a team within an agency or department, but with the client and stakeholders too. If doing your job and chucking it over the fence ever worked, it certainly doesn’t fly any more. You can work in isolation, but doing really great work requires collaboration. The business end When you’re building sites professionally, every team member has to think about the business aspects. 
Estimating time, setting billing rates, and establishing deliverables are all part of the job. In 2008, Andrew Clarke gave us the Contract Killer sample contract we could use to establish a working agreement for a web design project. Richard Rutter agrees that contracts are still an essential part of business: They are there for both parties’ protection. Make sure you know what will happen if you decide you don’t want to work with the client any more (it happens) and, of course, what circumstances mean they can stop taking your services. Having a contract is one thing, but does it adequately protect both you and the client? Emma Jane Westby adds: Find a good IP lawyer/legal counsel. I routinely had an IP lawyer read all of my contracts to find loopholes I wouldn’t have noticed. I didn’t always change the contract, but at least I knew what might come back to bite me. So, you have a contract in place, and know what the project is. Brian Suda recommends keeping track of time and making sure you bill fairly for the hours the project costs you: If I go to a meeting and they are 15 minutes late, the billing clock has already started. They can’t expect me to be in the 1h meeting and not bill for the extra 15–30 minutes they wasted. It goes both ways too. You need to do your best to respect their deadlines and time frame – this is always hard to get right. As ever, it’s good business to do good business. Perhaps we can at last shed the old image of web designers being snowboarding layabouts and demonstrate to clients that we care as much about conducting professional business as they do. Time to review It’s a lot to take in. Some of these ideas and practices will be familiar, others new and yet to be evaluated. The web moves at a fast pace, and we need to be constantly reexamining our tools, techniques and working practices. The most important thing is not to blindly adopt any and all suggestions, but to carefully look at what the benefits might be and decide how they apply to your work. Could you benefit from more formalised development and deployment procedures? Would your design projects run more smoothly and have a longer maintainable life if you approached the solution as a componentised system rather than a series of pages? Are your teams structured in a way that enables the most fluid communication, or are there changes you could make? Are your billing procedures and business agreements serving you and your clients in the best way possible? The new year is a good time to look at your working practices and see what can be improved, and maybe this time next year you’ll look back and think “thank goodness we don’t work like that any more”. 2014 Drew McLellan drewmclellan 2014-12-01T00:00:00+00:00 https://24ways.org/2014/what-it-takes-to-build-a-website/ business
66 Solve the Hard Problems So, here we find ourselves on the cusp of 2016. We’ve had a good year – the web is still alive, no one has switched it off yet. Clients still have websites, teenagers still have phone apps, and there continue to be plenty of online brands to meaningfully engage with each day. Good job team, high fives all round. As it’s the time to make resolutions, I wanted to share three small ideas to take into the new year. Get good at what you do “How do you get to Carnegie Hall?” the old joke goes. “Practise, practise, practise.” We work in an industry where there is an awful lot to learn. There’s a lot to learn to get started and then once you do, there’s a lot more to learn to keep your skills current. Just when you think you’ve mastered something, it changes. This is true of many industries, of course, but the sheer pace of change for us makes learning not an annual activity, but daily. Learning takes time, and while I’m not convinced that every skill takes the fabled ten thousand hours to master, there is certainly no escaping that to remain current we must reinvest time in keeping our skills up to date. Picking where to spend your time One of the hardest aspects of this thing of ours is just choosing what to learn. If you, like me, invested any time in learning the Less CSS preprocessor over the last few years, you’ll probably now be spending your time relearning Sass instead. If you spent time learning Grunt, chances are you’ll now be thinking about whether you should switch to Gulp. It’s not just that there are new types of tools, there are new tools and frameworks to do the things you’re already doing, but, well, differently. Deciding what to learn is hard and the costs of backing the wrong horse can seriously mount up; so much so that by the time you’ve learned and then relearned the tools everyone says you need for your job, there’s rarely enough time to spend really getting to know how best to use them.  Practise, practise, practise Do you know how you don’t get to Carnegie Hall? By learning a new instrument each week. It takes time and experience to really learn something well. That goes for a new JavaScript framework as much as a violin. If you flit from one shiny new thing to another, you’re destined to produce amateurish work forever. Learn the new thing, but then stick with it long enough to get really good at it – even if Twitter trolls try to convince you it’s not cool. What’s really not cool is living as a forevernoob. If you’re still not sure what to learn, go back to basics. Considering a new CSS or JavaScript framework? Invest that time in learning the underlying CSS or JavaScript really well instead. Those skills will stand the test of time. Audience and purpose Back when I was in school, my English teacher (a nice Welsh lady, who I appreciate more now than I did back then) used to love to remind us that every piece of writing should have an audience and a purpose. So much so that audience and purpose almost became her catch phrase. For every essay, article or letter, we were reminded to consider who we were writing it for and what we were trying to achieve. It’s something I think about a lot; certainly when writing, but also in almost every other creative endeavour. Asking who is this for and what am I trying to achieve applies equally to designing a logo or website, through to composing music or writing software. Being productive It seems like everyone wants to have a product these days. 
As someone who used to do client services work and now has a product company, I often talk with people who are interested in taking something they’ve built in-house and turning it into a product. You know the sort of thing: a design agency with its own CMS or project management web app; the very logical thought process of: if this helps our business, maybe others will find it valuable too; the question that inevitably follows: could we turn this into a product? Whether consciously or not, the audience and purpose influence nearly every aspect of your creative process. Once written or designed or developed or created, revising a work to change the audience and purpose can be quite a challenge. No matter how much you want to turn the tension-building, atmospheric music for a horror film into a catchy chart hit, it’s going to be a struggle. Yes, it’s music, but that’s neither the audience nor purpose for which it was created. The same is absolutely true for your in-house tools – those were also designed for a specific audience and purpose. Your in-house CMS would have been designed with an audience of your own development team, who are busy implementing sites for clients. The purpose is to make that team more productive overall, taking into account considerations of maintaining multiple sites on a common codebase, training clients, a more mature and stable platform and all the other benefits of reusing the same code for each project. The audience is your team and the purpose increased productivity. That’s very different from a customer who wants to buy a polished system to use off-the-shelf. If their needs perfectly aligned with yours then they wouldn’t be in the market for your product – they would have built their own. Sometimes you hear the advice to “scratch your own itch” when it comes to product design. I don’t completely agree. Got an itch? Great. Find other itchy people and sell them a backscratcher. Building a product, like designing a website, is a lot of work. It requires knowing your audience and purpose inside out. You can’t fudge it and you can’t just hope you’ll find an audience for some old thing you have lying around. Always consider the audience and purpose for everything you create. It’s often the difference between success and failure. Solve the hard problems Human beings have a natural tendency to avoid hard problems. In digital design (websites, software, whatever) the received wisdom is often that we can get 80% of the way towards doing the hard thing by doing something that’s not very hard. Do you know what you get at the end of it? Paid. But nothing really great ever happens that way. I worked on a client project a while back where one of the big challenges was making full use of the massive image library they had built up over the years. The client had tens of thousands of photographs, along with a fair amount of video and a large MP3 audio library too. If it wasn’t managed carefully, storage sizes would get out of control, content would go unattributed, and everything would get very messy very quickly. I could tell from the outset that this aspect of the project was going to be a constant problem. So we tackled it head-on. We designed and built a media management system to hold and process all the assets, and added an API so the content management system could talk to it. Every time the site needed a photo at a new size, it made an API request to the system and everything was handled seamlessly. 
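To make that flow more concrete, here’s a minimal sketch of the kind of request involved – the endpoint, parameter names and response shape are all invented for illustration, not the actual project’s API. // Ask a hypothetical media system for a rendition of an asset at a given size function getImageUrl(assetId, width, height, callback) { var xhr = new XMLHttpRequest(); xhr.open('GET', '/media-api/assets/' + assetId + '?w=' + width + '&h=' + height); xhr.onload = function() { if (xhr.status === 200) { // the media system responds with the URL of the resized, processed image callback(null, JSON.parse(xhr.responseText).url); } else { callback(new Error('Media API returned ' + xhr.status)); } }; xhr.send(); } The CMS can then drop the returned URL straight into its templates, leaving all the resizing, storage and attribution concerns to the media system.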
It was a daunting job to invest all the time and effort in building that dedicated system and API, but it really paid off. Instead of having the constant troubles of a vast library of media, it became one of the strongest parts of the project. Turn your hardest problems into your biggest strengths There’s a funny thing about hard problems. The hardest problems are the most fun to solve and have the biggest impact. Maybe you’re the sort of person who clocks in for work, does their job and clocks out at 5pm without another thought. But I don’t think you are, because you’re here reading this. If you really love what you do, I don’t think you can be satisfied in your work unless you’re seeking out and working on those hard problems. That’s where the magic is. The new year is a helpful time to think about breaking bad habits. Whether it’s smoking a bit less, or going to the gym a bit more, the ticking over of the calendar can provide the motivation for a new start. I have some suggestions for you. Get good at what you do. Practise your skills and don’t just flit from one shiny thing to the next. Remember who you’re doing it for and why. Consider the audience and purpose for everything you create. Solve the hard problems. It’s more interesting, more satisfying, and has a greater impact. As we move into 2016, these are the things I’m going to continue to work on. Maybe you’d like to join me. 2015 Drew McLellan drewmclellan 2015-12-24T00:00:00+00:00 https://24ways.org/2015/solve-the-hard-problems/ process
80 HTML5 Video Bumpers Video is a bigger part of the web experience than ever before. With native browser support for HTML5 video elements freeing us from the tyranny of plugins, and the availability of faster internet connections to the workplace, home and mobile networks, it’s now pretty straightforward to publish video in a way that can be consumed in all sorts of ways on all sorts of different web devices. I recently worked on a project where the client had shot some dedicated video shorts to publish on their site. They also had some five-second motion graphics produced to top and tail the videos with context and branding. This pretty common requirement is a great idea on the web, where a user might land at your video having followed a link and be viewing a page without much context. Known as bumpers, these short introduction clips help brand a video and make it look a lot more professional. Adding bumpers to a video The simplest way to add bumpers to a video would be to edit them on to the start and end of the video file itself. Cooking the bumpers into the video file is easy, but should you ever want to update them it can become a real headache. If the branding needs updating, for example, you’d need to re-edit and re-encode all your videos. Not a fun task. What if the bumpers could be added dynamically? That would enable you to use the same bumper for multiple videos (decreasing download time for users who might watch more than one) and to update the bumpers whenever you wanted. You could change them seasonally, update them for special promotions, run different advertising slots, perform multivariate testing, or even target different bumpers to different users. The trade-off, of course, is that if you dynamically add your bumpers, there’s a chance that a user in a given circumstance might not see the bumper. For example, if the main video feature was uploaded to YouTube, you’d have no way to control the playback. As always, you need to weigh up the pros and cons and make your choice. HTML5 bumpers If you wanted to dynamically add bumpers to your HTML5 video, how would you go about it? That was the question I found myself needing to answer for this particular client project. My initial thought was to treat it just like an image slideshow. If I were building a slideshow that moved between images, I’d use CSS absolute positioning with z-index to stack the images up on top of each other in a pile, with the first image on top. To transition to the second image, I’d use JavaScript to fade the top image out, revealing the second image beneath it. Now that video is just a native object in the DOM, just like an image, why not do the same? Stack the videos up with the opening bumper on top, listen for the video’s onended event, and fade it out to reveal the main feature behind. Good idea, right? Wrong Remember that this is the web. It’s never going to be that easy. The problem here is that many non-desktop devices use native, dedicated video players. Think about watching a video on a mobile phone – when you play the video, the phone often goes full-screen in its native player, leaving the web page behind. There’s no opportunity to fade or switch z-index, as the video isn’t being viewed in the page. Your page is left powerless. Powerless! So what can we do? What can we control? Those of us with particularly long memories might recall a time before CSS, when we’d have to use JavaScript to perform image rollovers. 
As CSS background images weren’t a practical reality, we would use lots of <img> elements, and perform a rollover by modifying the src attribute of the image. Turns out, this old trick of modifying the source can help us out with video, too. In most cases, modifying the src attribute of a <video> element, or perhaps more likely the src attribute of a <source> element, will swap from one video to another. Swappin’ it Let’s take a deliberately simple example of a super-basic video tag: <video src="mycat.webm" controls>no fallback coz i is lame, innit.</video> We could very simply write a script to find all video tags and give them a new src to show our bumper. <script> var videos, i, l; videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].setAttribute('src', 'bumper-in.webm'); } </script> View the example in a browser with WebM support. You’ll see that the video is swapped out for the opening bumper. Great! Beefing it up Of course, we can’t just publish video in one format. In practical use, you need a <video> element with multiple <source> elements containing your different source formats. <video controls> <source src="mycat.mp4" type="video/mp4" /> <source src="mycat.webm" type="video/webm" /> <source src="mycat.ogv" type="video/ogg" /> </video> This time, our script needs to loop through the sources, not the videos. We’ll use a regular expression replacement to swap out the file name while maintaining the correct file extension. <script> var sources, i, l, orig; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); // reload the video sources[i].parentNode.load(); } </script> The difference this time is that when changing the src of a <source>, we need to call the .load() method on the video to get it to acknowledge the change. See the code in action, this time in a wider range of browsers. But, my video! I guess we should get the original video playing again. Keeping the same markup, we need to modify the script to do two things: store the original src in a data- attribute so we can access it later, and add an event listener so we can detect the end of the bumper playing and load the original video back in. As we need to loop through the videos this time to add the event listener, I’ve moved the .load() call into that loop. It’s a bit more efficient to call it only once after modifying all the video’s sources. <script> var videos, sources, i, l, orig; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('data-orig', orig); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); } videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].load(); videos[i].addEventListener('ended', function(){ sources = this.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('data-orig'); if (orig) { sources[i].setAttribute('src', orig); } sources[i].setAttribute('data-orig',''); } this.load(); this.play(); }); } </script> Again, view the example to see the bumper play, followed by our spectacular main feature. (That’s my cat, Widget. His interests include sleeping and internet marketing.) Tidying things up The final thing to do is add our closing bumper after the main video has played.
This involves the following changes: We need to keep track of whether the src has been changed, so we only reload the video if it has. I’ve added the modified variable to track this, and it stops us getting into a situation where the video just loops forever. We add an else to the event listener, for when orig is false (meaning the main feature has been playing), to load in the end bumper. We also check that we’re not already playing the end bumper. Because looping. <script> var videos, sources, i, l, orig, current, modified; sources = document.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('src'); sources[i].setAttribute('data-orig', orig); sources[i].setAttribute('src', orig.replace(/(\w+)\.(\w+)/, 'bumper-in.$2')); } videos = document.getElementsByTagName('video'); for(i=0, l=videos.length; i<l; i++) { videos[i].load(); modified = false; videos[i].addEventListener('ended', function(){ sources = this.getElementsByTagName('source'); for(i=0, l=sources.length; i<l; i++) { orig = sources[i].getAttribute('data-orig'); if (orig) { sources[i].setAttribute('src', orig); modified = true; }else{ current = sources[i].getAttribute('src'); if (current.indexOf('bumper-out')==-1) { sources[i].setAttribute('src', current.replace(/([\w]+)\.(\w+)/, 'bumper-out.$2')); modified = true; }else{ this.pause(); modified = false; } } sources[i].setAttribute('data-orig',''); } if (modified) { this.load(); this.play(); } }); } </script> Yo ho ho, that’s a lot of JavaScript. See it in action – you should get a bumper, the cat video, and an end bumper. Of course, this code works fine for demonstrating the principle, but it’s very procedural. Nothing wrong with that, but to do something similar in production, you’d probably want to make the code more modular to ease maintainability. Besides, you may want to use a framework, rather than basic JavaScript. The end credits One really important principle here is that of progressive enhancement. If the browser doesn’t support JavaScript, the user won’t see your bumper, but they will get the main video. If the browser supports JavaScript but doesn’t allow you to modify the src (as was the case with older versions of iOS), the user won’t see your bumper, but they will get the main video. If a search engine or social media bot grabs your page and looks for content, they won’t see your bumper, but they will get the main video – which is absolutely what you want. This means that if the bumper is absolutely crucial, you may still need to cook it into the video. However, for many applications, running it dynamically can work quite well. As always, it comes down to three things: measure your audience (know how people access your site); test the solution (make sure it works for your audience); and plan for failure (it’s the web and that’s how things work ‘round these parts). But most of all, play around with it, have fun and build something awesome. 2012 Drew McLellan drewmclellan 2012-12-01T00:00:00+00:00 https://24ways.org/2012/html5-video-bumpers/ code
101 Easing The Path from Design to Development As a web developer, I have the pleasure of working with a lot of different designers. There has been a lot of industry discussion of late about designers and developers, focusing on how different we sometimes are and how the interface between our respective phases of a project (that is to say moving from a design phase into production) can sometimes become a battleground. I don’t believe it has to be a battleground. It’s actually more like being a dance partner – our steps are different, but as long as we know our own part and have a little knowledge of our partner’s steps, it all goes together to form a cohesive dance. Albeit with less spandex and fewer sequins (although that may depend on the project in question). As the process usually flows from design towards development, it’s most important that designers have a little knowledge of how the site is going to be built. At the specialist web development agency I’m part of, we find that designs that have been well considered from a technical perspective help to keep the project on track and on budget. Based on that experience, I’ve put together my checklist of things that designers should consider before handing their work over to a developer to build. Layout One rookie mistake made by traditionally trained designers transferring to the web is to forget that a web browser is not a fixed medium. Unlike designing a magazine layout or a piece of packaging, there are lots of available options to consider. Should the layout be fluid and resize with the window, or should it be set to a fixed width? If it’s fluid, which parts expand and which not? If it’s fixed, should it sit in the middle of the window or to one side? If any part of the layout is going to be flexible (get wider and narrower as required), consider how any graphics are affected. Images don’t usually look good if displayed at anything other than their original size, so how should they behave? If a column is going to get wider than it’s shown in the Photoshop comp, it may be necessary to provide separate wider versions of any background images. Text size and content volume A related issue is considering how the layout behaves with both different sizes of text and different volumes of content. Whilst text zooming rather than text resizing is becoming more commonplace as the default behaviour in browsers, it’s still a fundamentally important principle of web design that we are suggesting and not dictating how something should look. Designs must allow for a little give and take in the text size, and how this affects the design needs to be taken into consideration. Keep in mind that the same font can display differently in different places and platforms. Something as simple as Times will display wider on a Mac than on Windows. However, the main impact of text resizing is the change in how much vertical space copy takes up. This is particularly important where space is limited by the design (making text bigger causes many more problems than making text smaller). Each element from headings to box-outs to navigation items and buttons needs to be able to expand at least vertically, if not horizontally as well. This may require some thought about how elements on the page may wrap onto new lines, as well as again making sure to provide extended versions of any graphical elements. Similarly, it’s rare these days to know exactly what content you’re working with when a site is designed.
Many, if not most, sites are designed as a series of templates for some kind of content management system, and so designs cannot be tweaked around any specific item of content. Designs must be able to cope with both much greater and much lesser volumes of content than might be thrown in at the lorem ipsum phase. Particular things to watch out for are things like headings (how do they wrap onto multiple lines?) and any user-generated items like usernames. It can be very easy to forget that whilst you might expect something like a username to be 8-12 characters, if the systems powering your site allow for 255 characters there’ll always be someone who’ll go there. Expect them to do so. Again, whether your site is content managed or not, consider the possibility that the structure might be expanded in the future. Consider how additional items might be added to each level of navigation. Whilst it’s rarely desirable to make significant changes without revisiting the site’s information architecture more thoroughly, it’s an inevitable fact of life that the structure needs a little bit of flexibility to change over time. Interactions with and without JavaScript A great number of sites now make good use of JavaScript to streamline the user interface and make everything just that touch more usable. Remember, though, that any developer worth their salt will start by building the interface without JavaScript, get it all working, and then layer that JavaScript on top. This is to allow for users viewing the site without JavaScript available or enabled in their browser. Designers need to consider both states of any feature they’re designing – how it looks and functions with and without JavaScript. If the feature does something fancy with Ajax, consider how the same can be achieved with basic HTML forms, links and intermediary pages. These all need to be designed, because this is how some of your users will interact with the site. Logged in and logged out states When designing any type of web application or site that has a membership system – that is to say users can create an account and log into the site – the design will need to consider how any element is presented in both logged in and logged out states. For some items there’ll be no difference, whereas for others there may be considerable differences. Should an item be hidden completely from logged out users? Should it look different in some way? Perhaps it should look the same, but prompt the user to log in when they interact with it. If so, what form should that prompt take, and how does the user progress through the authentication process to arrive back at the task they were originally trying to complete? Couple logged in and logged out states with the possible absence of JavaScript, and every feature needs to be designed in four different states: Logged out with JavaScript available Logged in with JavaScript available Logged out without JavaScript available Logged in without JavaScript available Fonts There are three main causes of war in this world: religions, politics and fonts. I’ve said publicly before that I believe the responsibility for this falls squarely at the feet of Adobe Photoshop. Photoshop, like a mistress at a brothel, parades a vast array of ropey, yet strangely enticing typefaces past the eyes of weak, lily-livered designers, who can’t help but crumble to their curvy charms. Yet, on the web, we have to be a little more restrained in our choice of typefaces.
The purest solution is always to make the best use of the available fonts, but this isn’t always the most desirable solution from a design point of view. There are several technical solutions, such as techniques that utilise Flash (like sIFR), dynamically generated images and even canvas in newer browsers. Discuss the best approach with your developer, as each technique has different trade-offs, and this may impact the design in other ways. Messaging Any site that has interactive elements, from a simple contact form through to a fully featured online software application, involves some kind of user messaging. By this I mean the error messages when something goes wrong and the success and thank-you messages when something goes right. These typically appear as the result of an interaction, so are easy to forget and miss off a Photoshop comp. For every form, consider what gets displayed to the user if they make a mistake or miss something out, and also what gets displayed back when the interaction is successful. What do they see and where do they go next? With Ajax interactions, the user doesn’t get any visual feedback of the site waiting for a response from the server unless you design it that way. Consider using a ‘waiting’ or ‘in progress’ spinner to give the user some visual feedback of any background processes. How should these look? How do they animate? Similarly, also consider the big error pages like a 404. With luck, these won’t often be seen, but it’s precisely when they are that careful design matters the most. Form fields Depending on the visual style of your site, the look of a browser’s default form fields and buttons can sometimes jar. It’s understandable that many a designer wants to change the way they look. Depending on the browser in question, various things can be done to style form fields and their buttons (although it’s not as flexible as most would like). A common request is to replace the default buttons with a graphical button. This is usually achievable in most cases, although it’s not easy to get a consistent result across all browsers – particularly when it comes to vertical positioning and the space surrounding the button. If the layout is very precise, or if space is at a premium, it’s always best to try and live with the browser’s default form controls. Whichever way you go, it’s important to remember that in general, each form field should have a label, and each form should have a submit button. If you find that your form breaks either of those rules, you should double check. Practical tips for handing files over There are a couple of basic steps that a designer can carry out to make sure that the developer has the best chance of implementing the design exactly as envisioned. If working with Photoshop or Fireworks or a similar comping tool, it helps to group and label layers to make it easy for a developer to see which need to be turned on and off to isolate parts of the page and different states of the design. Also, if you don’t work in the same office as your developer (and so they can’t quickly check with you), provide a PDF of each page and state so that your developer can see how each page should look without any confusion over which layers should be switched on or off. These also act as a handy quick reference that can be used without firing up Photoshop (which can kill both productivity and your machine). Finally, provide a colour reference showing the RGB values of all the key colours used throughout the design.
Without this, the developer will end up colour-picking from the comps, and could potentially end up with different colours to those you intended. Remember, for a lot of developers, working in a tool like Photoshop is like presenting a designer with an SSH terminal into a web server. It’s unfamiliar ground and easy to get things wrong. Be the expert of your own domain and help your colleagues out when they’re out of their comfort zone. That goes both ways. In conclusion When asked the question of how to smooth hand-over between design and development, almost everyone who has experienced this situation could come up with their own list. This one is mine, based on some of the more common experiences we have at edgeofmyseat.com. So in bullet point form, here’s my checklist for handing a design over. Is the layout fixed, or fluid? Does each element cope with expanding for larger text and more content? Are all the graphics large enough to cope with an area expanding? Does each interactive element have a state for with and without JavaScript? Does each element have a state for logged in and logged out users? How are any custom fonts being displayed? (and does the developer have the font to use?) Does each interactive element have error and success messages designed? Do all form fields have a label and each form a submit button? Is your Photoshop comp document well organised? Have you provided flat PDFs of each state? Have you provided a colour reference? Are we having fun yet? 2008 Drew McLellan drewmclellan 2008-12-01T00:00:00+00:00 https://24ways.org/2008/easing-the-path-from-design-to-development/ process
124 Writing Responsible JavaScript Without a doubt, JavaScript has been making something of a comeback in the last year. If you’re involved in client-side development in any way at all, chances are that you’re finding yourself writing more JavaScript now than you have in a long time. If you learned most of your JavaScript back when DHTML was all the rage and before DOM Scripting was in vogue, there have been some big shifts in the way scripts are written. Most of these are in the way event handlers are assigned and functions declared. Both of these changes are driven by the desire to write scripts that are responsible page citizens, both in not tying behaviour to content and in taking care not to conflict with other scripts. I thought it may be useful to look at some of these more responsible approaches to learn how best to write scripts that are independent of the page content and are safely portable between different applications. Event Handling Back in the heady days of Web 1.0, if you wanted to have an object on the page react to something like a click, you would simply go ahead and attach an onclick attribute. This was easy and understandable, but much like the font tag or the style attribute, it has the downside of mixing behaviour or presentation in with our content. As we’ve learned with CSS, there are big benefits in keeping those layers separate. Hey, if it works for CSS, it should work for JavaScript too. Just like with CSS, instead of adding an attribute to our element within the document, the more responsible way to do that is to look for the item from your script (like CSS does with a selector) and then assign the behaviour to it. To give an example, take this oldskool onclick use case: <a id="anim-link" href="#" onclick="playAnimation()">Play the animation</a> This could be rewritten by removing the onclick attribute, and instead doing the following from within your JavaScript. document.getElementById('anim-link').onclick = playAnimation; It’s all in the timing Of course, it’s never quite that easy. To be able to attach that onclick, the element you’re targeting has to exist in the page, and the page has to have finished loading for the DOM to be available. This is where the onload event is handy, as it fires once everything has finished loading. Common practice is to have a function called something like init() (short for initialise) that sets up all these event handlers as soon as the page is ready. Back in the day we would have used the onload attribute on the <body> element to do this, but of course what we really want is: window.onload = init; As an interesting side note, we’re using init here rather than init() so that the function is assigned to the event. If we used the parentheses, the init function would have been run at that moment, and the result of running the function (rather than the function itself) would be assigned to the event. Subtle, but important. As is becoming apparent, nothing is ever simple, and we can’t just go around assigning our initialisation function to window.onload. What if we’re using other scripts in the page that might also want to listen out for that event? Whichever script got there last would overwrite everything that came before it. To manage this, we need a script that checks for any existing event handlers, and adds the new handler to it. Most of the JavaScript libraries have their own systems for doing this.
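As an illustration, here’s a simplified sketch of the sort of wrapper such a library might provide – not any particular library’s actual code – covering both the W3C and Internet Explorer event models. function addEvent(element, type, handler) { if (element.addEventListener) { // W3C standard model: handlers stack up rather than overwrite element.addEventListener(type, handler, false); } else if (element.attachEvent) { // Internet Explorer's proprietary model element.attachEvent('on' + type, handler); } } With a helper like this, any number of scripts can call addEvent(window, 'load', init) without trampling on each other’s handlers.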
If you’re not using a library, Simon Willison has a good stand-alone example: function addLoadEvent(func) { var oldonload = window.onload; if (typeof window.onload != 'function') { window.onload = func; } else { window.onload = function() { if (oldonload) { oldonload(); } func(); } } } Obviously this is just a toe in the events model’s complex waters. Some good further reading is PPK’s Introduction to Events. Carving out your own space Another problem that rears its ugly head when combining multiple scripts on a single page is that of making sure that the scripts don’t conflict. One big part of that is ensuring that no two scripts are trying to create functions or variables with the same names. Reusing a name in JavaScript just overwrites whatever was there before it. When you create a function in JavaScript, you’ll be familiar with doing something like this. function foo() { ... goodness ... } This is actually just creating a variable called foo and assigning a function to it. It’s essentially the same as the following. var foo = function() { ... goodness ... } This name foo is by default created in what’s known as the ‘global namespace’ – the general pool of variables within the page. You can quickly see that if two scripts use foo as a name, they will conflict because they’re both creating those variables in the global namespace. A good solution to this problem is to add just one name into the global namespace, make that one item either a function or an object, and then add everything else you need inside that. This takes advantage of JavaScript’s variable scoping to contain your mess and stop it interfering with anyone else’s. Creating An Object Say I was wanting to write a bunch of functions specifically for using on a site called ‘Foo Online’. I’d want to create my own object with a name I think is likely to be unique to me. var FOOONLINE = {}; We can then start assigning functions and variables to it like so: FOOONLINE.message = 'Merry Christmas!'; FOOONLINE.showMessage = function() { alert(this.message); }; Calling FOOONLINE.showMessage() in this example would alert out our seasonal greeting. The exact same thing could also be expressed in the following way, using the object literal syntax. var FOOONLINE = { message: 'Merry Christmas!', showMessage: function() { alert(this.message); } }; Creating A Function to Create An Object We can extend this idea a bit further by using a function that we run in place to return an object. The end result is the same, but this time we can use closures to give us something like private methods and properties of our object. var FOOONLINE = function(){ var message = 'Merry Christmas!'; return { showMessage: function(){ alert(message); } } }(); There are two important things to note here. The first is the parentheses at the end of line 10. Just as we saw earlier, this runs the function in place and causes its result to be assigned. In this case the result of our function is the object that is returned at line 4. The second important thing to note is the use of the var keyword on line 2. This ensures that the message variable is created inside the scope of the function and not in the global namespace. Because of the way closure works (which, if you’re not familiar with it, just suspend your disbelief for a moment) that message variable is visible to everything inside the function but not outside. Trying to read FOOONLINE.message from the page would return undefined.
This is useful for simulating the concept of private class methods and properties that exist in other programming languages. I like to take the approach of making everything private unless I know it’s going to be needed from outside, as it makes the interface into your code a lot clearer for someone else to read. All Change, Please So that was just a whistle-stop tour of a couple of the bigger changes that can help to make your scripts better page citizens. I hope it makes useful Sunday reading, but obviously this is only the tip of the iceberg when it comes to designing modular, reusable code. For some, this is all familiar ground already. If that’s the case, I encourage you to perhaps submit a comment with any useful resources you’ve found that might help others get up to speed. Ultimately it’s in all of our interests to make sure that all our JavaScript interoperates well – share your tips. 2006 Drew McLellan drewmclellan 2006-12-10T00:00:00+00:00 https://24ways.org/2006/writing-responsible-javascript/ code
132 Tasty Text Trimmer In most cases, when designing a user interface it’s best to make a decision about how data is best displayed and stick with it. Failing to make a decision ultimately leads to too many user options, which in turn can be taxing on the poor old user. Under some circumstances, however, it’s good to give the user freedom in customising their workspace. One good example of this is the ‘Article Length’ tool in Apple’s Safari RSS reader. Sliding a slider left or right dynamically changes the length of each article shown. It’s that kind of awesomey magic stuff that’s enough to keep you from sleeping. Let’s build one. The Setup Let’s take a page that has lots of long text items, a bit like a news page or like Safari’s RSS items view. If we were to attach a class name to each element we wanted to resize, that would give us something to hook onto from the JavaScript. Example 1: The basic page. As you can see, I’ve wrapped my items in a DIV and added a class name of chunk to them. It’s these chunks that we’ll be finding with the JavaScript. Speaking of which … Our Core Functions There are two main tasks that need performing in our script. The first is to find the chunks we’re going to be resizing and store their original contents away somewhere safe. We’ll need this so that if we trim the text down we’ll know what it was if the user decides they want it back again. We’ll call this loadChunks. var loadChunks = function(){ var everything = document.getElementsByTagName('*'); var i, l; chunks = []; for (i=0, l=everything.length; i<l; i++){ if (everything[i].className.indexOf(chunkClass) > -1){ chunks.push({ ref: everything[i], original: everything[i].innerHTML }); } } }; The variable chunks is stored outside of this function so that we can access it from our next core function, which is doTrim. var doTrim = function(interval) { if (!chunks) loadChunks(); var i, l; for (i=0, l=chunks.length; i<l; i++){ var a = chunks[i].original.split(' '); a = a.slice(0, interval); chunks[i].ref.innerHTML = a.join(' '); } }; The first thing that needs to be done is to call loadChunks if the chunks variable isn’t set. This should only happen the first time doTrim is called, as from that point the chunks will be loaded. Then all we do is loop through the chunks and trim them. The trimming itself (lines 6-8) is very simple. We split the text into an array of words (line 6), then select only a portion from the beginning of the array up until the number we want (line 7). Finally the words are glued back together (line 8). In essence, that’s it, but it leaves us needing to know how to get the number into this function in the first place, and how that number is generated by the user. Let’s look at the latter case first. The YUI Slider Widget There are lots of JavaScript libraries available at the moment. A fair few of those are really good. I use the Yahoo! User Interface Library professionally, but have only recently played with their pre-built slider widget. Turns out, it’s pretty good and perfect for this task. Once you have the library files linked in (check the docs linked above) it’s fairly straightforward to create yourself a slider. slider = YAHOO.widget.Slider.getHorizSlider("sliderbg", "sliderthumb", 0, 100, 5); slider.setValue(50); slider.subscribe("change", doTrim); All that’s needed then is some CSS to make the slider look like a slider, and of course a few bits of HTML. We’ll see those later. See It Working!
Rather than spell out all the nuts and bolts of implementing this fairly simple script, let’s just look at it in action and then pick out some interesting bits I’ve added. Example 2: Try the Tasty Text Trimmer. At the top of the JavaScript file I’ve added a small number of settings. var chunkClass = 'chunk'; var minValue = 10; var maxValue = 100; var multiplier = 5; Obvious candidates for configuration are the class name used to find the chunks, and also some minimum and maximum values. The minValue is the fewest words we wish to display when the slider is all the way down. The maxValue is the length of the slider, in this case 100. Moving the slider makes a call to our doTrim function with the current value of the slider. For a slider 100 pixels long, this is going to be in the range of 0-100. That might be okay for some things, but for longer items of text you’ll want to allow for displaying more than 100 words. I’ve accounted for this by adding in a multiplier – in my code I’m multiplying the value by 5, so a slider value of 50 shows 250 words. You’ll probably want to tailor the multiplier to the type of content you’re using. Keeping it Accessible This effect isn’t something we can really achieve without JavaScript, but even so we must make sure that this functionality has no adverse impact on the page when JavaScript isn’t available. This is achieved by adding the slider markup to the page from within the insertSliderHTML function. var insertSliderHTML = function(){ var s = '<a id="slider-less" href="#less"><img src="icon_min.gif" width="10" height="10" alt="Less text" class="first" /></a>'; s +=' <div id="sliderbg"><div id="sliderthumb"><img src="sliderthumbimg.gif" /></div></div>'; s +=' <a id="slider-more" href="#more"><img src="icon_max.gif" width="10" height="10" alt="More text" /></a>'; document.getElementById('slider').innerHTML = s; } The other important factor to consider is that a slider can be tricky to use unless you have good eyesight and pretty well controlled motor skills. Therefore we should provide a method of changing the value via the keyboard. I’ve done this by making the icons at either end of the slider links, so they can be tabbed to. Clicking on either icon fires the appropriate JavaScript function to show more or less of the text. In Conclusion The upshot of all this is that without JavaScript the page just shows all the text as it normally would. With JavaScript we have a slider for trimming the text excerpts that can be controlled with the mouse or with a keyboard. If you’re like me and have just scrolled to the bottom to find the working demo, here it is again: Try the Tasty Text Trimmer Trimmer for Christmas? Don’t say I never give you anything! 2006 Drew McLellan drewmclellan 2006-12-01T00:00:00+00:00 https://24ways.org/2006/tasty-text-trimmer/ code
165 Transparent PNGs in Internet Explorer 6 Newer breeds of browser such as Firefox and Safari have offered support for PNG images with full alpha channel transparency for a few years. With the use of hacks, support has been available in Internet Explorer 5.5 and 6, but the hacks are non-ideal and have been tricky to use. With IE7 winning masses of users from earlier versions over the last year, full PNG alpha-channel transparency is becoming more of a reality for day-to-day use. However, there are still plenty of IE6 users out there who we can’t leave out in the cold this Christmas, so in this article I’m going to look at what we can do to support IE6 users whilst taking full advantage of transparency for the majority of a site’s visitors. So what’s alpha channel transparency? Cast your minds back to the Ghost of Christmas Past, the humble GIF. Images in GIF format offer transparency, but that transparency is either on or off for any given pixel. Each pixel’s either fully transparent, or a solid colour. In GIF, transparency is effectively just a special colour you can choose for a pixel. The PNG format tackles the problem rather differently. As well as having any colour you choose, each pixel also carries a separate channel of information detailing how transparent it is. This alpha channel enables a pixel to be fully transparent, fully opaque, or critically, any step in between. This enables designers to produce images that can have, for example, soft edges without any of the ‘halo effect’ traditionally associated with GIF transparency. If you’ve ever worked on a site that has different colour schemes and therefore requires multiple versions of each graphic against a different colour, you’ll immediately see the benefit. What’s perhaps more interesting than that, however, is the extra creative freedom this gives designers in creating beautiful sites that can remain web-like in their ability to adjust, scale and reflow. The Internet Explorer problem Up until IE7, there has been no fully native support for PNG alpha channel transparency in Internet Explorer. However, since IE5.5 there has been some support in the form of a proprietary filter called the AlphaImageLoader. Internet Explorer filters can be applied directly in your CSS (for both inline and background images), or by setting the same CSS property with JavaScript. CSS: img { filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(...); } JavaScript: img.style.filter = "progid:DXImageTransform.Microsoft.AlphaImageLoader(...)"; That may sound like a problem solved, but all is not as it may appear. Firstly, as you may realise, there’s no CSS property called filter in the W3C CSS spec. It’s a proprietary extension added by Microsoft that could potentially cause other browsers to reject your entire CSS rule. Secondly, AlphaImageLoader does not magically add full PNG transparency support so that a PNG in the page will just start working. Instead, when applied to an element in the page, it draws a new rendering surface in the same space that element occupies and loads a PNG into it. If that sounds weird, it’s because that’s precisely what it is. However, by and large the result is that PNGs with an alpha channel can be accommodated. The pitfalls So, whilst support for PNG transparency in IE5.5 and 6 is possible, it’s not without its problems. Background images cannot be positioned or repeated The AlphaImageLoader does work for background images, but only for the simplest of cases.
If your design requires the image to be tiled (background-repeat) or positioned (background-position) you’re out of luck. The AlphaImageLoader allows you to set a sizingMethod to either crop the image (if necessary) or to scale it to fit. Not massively useful, but something at least. Delayed loading and resource use The AlphaImageLoader can be quite slow to load, and appears to consume more resources than a standard image when applied. Typically, you’d need to add thousands of GIFs or JPEGs to a page before you saw any noticeable impact on the browser, but with the AlphaImageLoader filter applied Internet Explorer can become sluggish after just a handful of alpha channel PNGs. The other noticeable effect is that as more instances of the AlphaImageLoader are applied, the longer it takes to render the PNGs with their transparency. The user sees the PNG load in its original non-supported state (with black or grey areas where transparency should be) before one by one the filter kicks in and makes them properly transparent. Both the issue of sluggish behaviour and delayed load only really manifest themselves with volume and size of image. Use just a couple of instances and it’s fine, but be careful adding more than five or six. As ever, test, test, test. Links become unclickable, forms unfocusable This is a big one. There’s a bug/weirdness with AlphaImageLoader that sometimes prevents interaction with links and forms when a PNG background image is used. This is sometimes reported as a z-index issue, but I don’t believe it is. Rather, it’s an artefact of that weird way the filter gets applied to the document almost outside of the normal render process. Often this can be solved by giving the links or form elements hasLayout using position: relative; where possible. However, this doesn’t always work and the non-interaction problem cannot always be solved. You may find yourself having to go back to the drawing board. Sidestepping the danger zones Frankly, it’s pretty bad news if you design a site, have that design signed off by your client, build it and then find out only at the end (because you don’t know what might trigger a problem) that your search field can’t be focused in IE6. That’s an absolute nightmare, and whilst it’s not likely to happen, it’s possible that it might. It’s happened to me. So what can you do? The best approach I’ve found to this scenario is to isolate the PNG or PNGs that are causing the problem. Step through the PNGs in your page, commenting them out one by one and retesting. Typically it’ll be the nearest PNG to the problem, so try there first. Keep going until you can click your links or focus your form fields. This is where you really need luck on your side, because you’re going to have to fake it. This will depend on the design of the site, but one way or another, create a replacement GIF or JPEG image that will give you an acceptable result. Then use conditional comments to serve that image to only users of IE older than version 7. A hack, you say? Well, you started it, chum. Applying AlphaImageLoader Because the filter property is invalid CSS, the safest pragmatic approach is to apply it selectively with JavaScript for only Internet Explorer versions 5.5 and 6. This helps ensure that by default you’re serving standard CSS to browsers that support both the CSS and PNG standards correctly, and then selectively patching up only the browsers that need it. Several years ago, Aaron Boodman wrote and released a script called sleight for doing just that.
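To give a flavour of how that works, here’s a stripped-down sketch of the sleight approach – this is not Aaron’s actual code, and the placeholder image path is hypothetical. // For IE 5.5/6 only: re-draw each PNG through AlphaImageLoader, // then swap the original image for a transparent GIF to keep the layout. // Assumes each image has explicit width and height set. var imgs = document.getElementsByTagName('img'); for (var i = 0; i < imgs.length; i++) { var img = imgs[i]; if (/\.png$/i.test(img.src)) { img.style.filter = "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='" + img.src + "', sizingMethod='scale')"; img.src = 'transparent.gif'; // 1x1 transparent GIF } }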
However, sleight dealt only with images in the page, and not background images applied with CSS. Building on top of Aaron’s work, I hacked sleight and came up with bgsleight for applying the filter to background images instead. That was in 2003, and over the years I’ve made a couple of improvements here and there to keep it ticking over and to resolve conflicts between sleight and bgsleight when used together. However, with alpha channel PNGs becoming much more widespread, it’s time for a new version. Introducing SuperSleight SuperSleight adds a number of new and useful features that have come from the day-to-day needs of working with PNGs. It works with both inline and background images, replacing the need for both sleight and bgsleight. It will automatically apply position: relative to links and form fields if they don’t already have position set (this can be disabled). It can be run on the entire document, or just a selected part where you know the PNGs are, which is better for performance. It detects background images set to no-repeat and sets the scaleMode to crop rather than scale. And it can be re-applied by any other JavaScript in the page – useful if new content has been loaded by an Ajax request. Download SuperSleight Implementation Getting SuperSleight running on a page is quite straightforward; you just need to link the supplied JavaScript file (or the minified version if you prefer) into your document inside conditional comments so that it is delivered to only Internet Explorer 6 or older. <!--[if lte IE 6]> <script type="text/javascript" src="supersleight-min.js"></script> <![endif]--> Supplied with the JavaScript is a simple transparent GIF file. The script replaces the existing PNG with this before re-layering the PNG over the top using AlphaImageLoader. You can change the name or path of the image at the top of the JavaScript file, where you’ll also find the option to turn off the adding of position: relative to links and fields if you don’t want that. The script is kicked off with a call to supersleight.init() at the bottom. The scope of the script can be limited to just one part of the page by passing an ID of an element to supersleight.limitTo(). And that’s all there is to it. Update March 2008: a version of this script as a jQuery plugin is also now available. 2007 Drew McLellan drewmclellan 2007-12-01T00:00:00+00:00 https://24ways.org/2007/supersleight-transparent-png-in-ie6/ code
166 Performance On A Shoe String Back in the summer, I happened to notice the official Wimbledon All England Tennis Club site had jumped to the top of Alexa’s Movers & Shakers list — a list that tracks sites that have had the biggest upturn or downturn in traffic. The lawn tennis championships were underway, and so traffic had leapt from almost nothing to crazy-busy in no time at all. Many sites have similar peaks in traffic, especially when they’re based around scheduled events. No one cares about the site for most of the year, and then all of a sudden – wham! – things start getting warm in the data centre. Whilst the thought of chestnuts roasting on an open server has a certain appeal, it’s less attractive if you care about your site being available to visitors. Take a look at this Alexa traffic graph showing traffic patterns for superbowl.com at the beginning of each year, and wimbledon.org in the month of July. Traffic graph from Alexa.com Whilst not on the same scale or with such dramatic peaks, we have a similar pattern of traffic here at 24ways.org. Over the last three years we’ve seen a dramatic pick-up in traffic over the month of December (as would be expected) and then a much lower, although steady, load throughout the year. What we do have, however, is the luxury of knowing when the peaks will be. For a normal site, be that a blog, small scale web app, or even a small corporate site, you often just cannot predict when you might get slashdotted, end up on the front page of Digg or linked to from a similarly high-profile site. You just don’t know when the peaks will be. If you’re a big commercial enterprise like the Super Bowl, scaling up for that traffic is simply a cost of doing business. But for most of us, we can’t afford to have massive capacity sat there unused for 90% of the year. What you have to do instead is work out how to deal with as much traffic as possible with the modest resources you have. In this article I’m going to talk about some of the things we’ve learned about keeping 24 ways running throughout December, whilst not spending a fortune on hosting we don’t need for 11 months of each year. We’ve not always got it right, but we’ve learned a lot along the way. The Problem To know how to deal with high traffic, you need to have a basic idea of what happens when a request comes into a web server. 24 ways is hosted on a single small virtual dedicated server with a great little hosting company in the UK. We run Apache with PHP and MySQL all on that one server. When a request comes in, a new Apache process is started to deal with the request (or assigned if there’s one available not doing anything). Each process takes a bunch of memory, so there’s a finite number of processes that you can run, and therefore a finite number of pages you can serve at once before your server runs out of memory. With our budget based on whatever is left over after beer, we need to get the best performance we can out of the resources available. As the goal is to serve as many pages as quickly as possible, there are several approaches we can take: reducing the amount of memory needed by each Apache process; reducing the amount of time each process is needed; and reducing the number of requests made to the server. Yahoo! have published some information on what they call Exceptional Performance, which is well worth reading, and complements many of my examples here. The Yahoo! guidelines very much look at things from a user perspective, which is always important.
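Since the first of those approaches is all about memory, it’s worth putting rough numbers on the ceiling before tweaking anything. A quick back-of-envelope check from the shell might look like this (commands illustrative of a typical Linux box; your Apache processes may be named apache2 rather than httpd):

free -m                    # total and available memory, in megabytes
ps -ylC httpd --sort=rss   # per-process memory usage in the RSS column (KB)

Divide the memory you can spare by a typical per-process figure and you have a ballpark for how many simultaneous requests the box can genuinely handle.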
Server tweaking If you’re in the position of being able to change your server configuration (our set-up gives us root access to what is effectively a virtual machine) there are some basic steps you can take to maximise the available memory and reduce the memory footprint. Without getting too boring and technical (whole books have been written on this) there are a couple of things to watch out for. Firstly, check what processes you have running that you might not need. Every megabyte of memory that you free up might equate to several thousand extra requests being served each day, so take a look at top and see what’s using up your resources. Quite often a machine configured as a web server will have some kind of mail server running by default. If your site doesn’t use mail (ours doesn’t) make sure it’s shut down and not using resources. Secondly, have a look at your Apache configuration and particularly what modules are loaded. The method for doing this varies between versions of Apache, but again, every module loaded increases the amount of memory that each Apache process requires and therefore limits the number of simultaneous requests you can deal with. The final thing to check is that Apache isn’t configured to start more servers than you have memory for. This is usually done by setting the MaxClients directive. When that limit is reached, your site is going to stop responding to further requests. However, if all else goes well that threshold won’t be reached, and if it is, the limit will at least stop the weight of the traffic taking the entire server down to a point where you can’t even log in to sort it out. Those are the main tidbits I’ve found useful for this site, although it’s worth repeating that entire books have been written on this subject alone. Caching Although the site is generated with PHP and MySQL, the majority of pages served don’t come from the database. The process of compiling a page on-the-fly involves quite a few trips to the database for content, templates, configuration settings and so on, and so can be slow and require a lot of CPU. Unless a new article or comment is published, the site doesn’t actually change between requests, and so it makes sense to generate each page once, save it to a file and then just serve all following requests from that file. We use QuickCache (or rather a plugin based on it) for this. The plugin integrates with our publishing system (Textpattern) to make sure the cache is cleared when something on the site changes. A similar plugin called WP-Cache is available for WordPress, but of course this could be done any number of ways, and with any back-end technology. The important principle here is to reduce the time it takes to serve a page by compiling the page once and serving that cached result to subsequent visitors. Keep away from your database if you can. Outsource your feeds We get around 36,000 requests for our feed each day. That really only works out at about 7,000 subscribers averaging five-and-a-bit requests a day, but it’s still 36,000 requests we could easily do without. Each request uses resources, and particularly during December all those requests can add up. The simple solution here was to switch our feed over to using FeedBurner. We publish the address of the FeedBurner version of our feed here, so those 36,000 requests a day hit FeedBurner’s servers rather than ours. In addition, we get pretty graphs showing how the subscriber-base is building.
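If you can’t easily change the feed address you publish, a server-side redirect achieves a similar offload. A hedged .htaccess-style sketch using Apache’s mod_rewrite (the FeedBurner URL here is hypothetical):

RewriteEngine On
# send local feed requests over to FeedBurner instead of building the feed ourselves
RewriteRule ^feed/?$ http://feeds.feedburner.com/24ways [R=302,L]

Each redirected request still touches your server, but answering with a 302 is far cheaper than compiling and serving the full feed.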
Off-load big files Larger files like images or downloads pose a problem not in bandwidth, but in the time it takes them to transfer. A typical page request is very quick, a few seconds at the most, resulting in the connection being freed up promptly. Anything that keeps a connection open for a long time is going to start killing performance very quickly. This year, we started serving most of the images for articles from a subdomain – media.24ways.org. Rather than pointing to our own server, this subdomain points to an Amazon S3 account where the files are held. It’s easy to pigeon-hole S3 as merely an online backup solution, and whilst not a fully fledged CDN, S3 works very nicely for serving larger media files. The roughly 20GB of files served this month have cost around $5 in Amazon S3 charges. That’s so affordable it may not be worth even taking the files back off S3 once December has passed. I found this article on Scalable Media Hosting with Amazon S3 to be really useful in getting started. I upload the files via a Firefox plugin (mentioned in the article) and then edit the ACL to allow public access to the files. The way S3 enables you to point DNS directly at it means that you’re not tied to always using the service, and that it can be transparent to your users. If your site uses photographs, consider uploading them to a service like Flickr and serving them directly from there. Many photo sharing sites are happy for you to link to images in this way, but do check the acceptable use policies in case you need to provide a credit or link back. Off-load small files You’ll have noticed the pattern by now – get rid of as much traffic as possible. When an article has a lot of comments and each of those comments has an avatar along with it, a great many requests are needed to fetch each of those images. In 2006 we started using Gravatar for avatars, but their servers were slow and were holding up page loads. To get around this we started caching the images on our server, but along with that came the burden of furnishing all the image requests. Earlier this year Gravatar changed hands and is now run by the same team behind WordPress.com. Those guys clearly know what they’re doing when it comes to high performance, so this year we went back to serving avatars directly from them. If your site uses avatars, it really makes sense to use a service like Gravatar where your users probably already have an account, and where the image requests are going to be dealt with for you. Know what you’re paying for The server account we use for 24 ways was opened in November 2005. When we first hit the front page of Digg in December of that year, we upgraded the server with a bit more memory, but other than that we were still running on that 2005 spec for two years. Of course, the world of technology has moved on in those years, prices have dropped and specs have improved. For the same amount we were paying for that 2005 spec server, we could have an account with twice the CPU, memory and disk space. So in November of this year I took out a new account and transferred the site from the old server to the new. In that single step we were prepared for dealing with twice the amount of traffic, and because of a special offer at the time I didn’t even have to pay the setup cost on the new server. So it really pays to know what you’re paying for and keep an eye out for ways you can make improvements without needing to spend more money. Further steps There’s nearly always more that can be done.
For example, there are some media files (particularly for older articles) that are not on S3. We also serve our CSS directly and it’s not minified or compressed. But by tackling the big problems first we’ve managed to reduce load on the server and at the same time make sure that the load being placed on the server can be dealt with in the most frugal way. Over the last 24 days we’ve served up articles to more than 350,000 visitors without breaking a sweat. On a busy day, that’s been nearly 20,000 visitors in just 24 hours. While in the grand scheme of things that’s not a huge amount of traffic, it can be a lot if you’re not prepared for it. However, with a little planning for the peaks you can help ensure that when the traffic arrives you’re ready to capitalise on it. Of course, people only visit 24 ways for the wealth of knowledge and experience that’s tied up in the articles here. Therefore I’d like to take the opportunity to thank all our authors this year who have given their time as a gift to the community, and to wish you all a very happy Christmas. 2007 Drew McLellan drewmclellan 2007-12-24T00:00:00+00:00 https://24ways.org/2007/performance-on-a-shoe-string/ ux
181 Working With RGBA Colour When Tim and I were discussing the redesign of this site last year, one of the clear goals was to have a graphical style without making the pages heavy with a lot of images. When we launched, a lot of people were surprised that the design wasn’t built with PNGs. Instead we’d used RGBA colour values, which is part of the CSS3 specification. What is RGBA Colour? We’re all familiar with specifying colours in CSS by defining the mix of red, green and blue light required to achieve our tone. This is fine and dandy, but whatever values we specify have one thing in common — the colours are all solid, flat, and well, a bit boring. Flat RGB colours CSS3 introduces a couple of new ways to specify colours, and one of those is RGBA. The A stands for Alpha, which refers to the level of opacity of the colour, or to put it another way, the amount of transparency. This means that we can set not only the red, green and blue values, but also control how much of what’s behind the colour shows through. Like with layers in Photoshop. Don’t We Have Opacity Already? The ability to set the opacity on a colour differs subtly from setting the opacity on an element using the CSS opacity property. Let’s look at an example. Here we have an H1 with foreground and background colours set against a page with a patterned background. Heading with no transparency applied h1 { color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); } By setting the CSS opacity property, we can adjust the transparency of the entire element and its contents: Heading with 50% opacity on the element h1 { color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); opacity: 0.5; } RGBA colour gives us something different – the ability to control the opacity of the individual colours rather than the entire element. So we can set the opacity on just the background: 50% opacity on just the background colour h1 { color: rgb(0, 0, 0); background-color: rgba(255, 255, 255, 0.5); } Or leave the background solid and change the opacity on just the text: 50% opacity on just the foreground colour h1 { color: rgba(0, 0, 0, 0.5); background-color: rgb(255, 255, 255); } The How-To You’ll notice that above I’ve been using the rgb() syntax for specifying colours. This is a bit less common than the usual hex codes (like #FFF) but it makes sense when starting to use RGBA. As there’s no way to specify opacity with hex codes, we use rgba() like so: color: rgba(255, 255, 255, 0.5); Just like rgb() the first three values are red, green and blue. You can specify these as 0-255 or 0%-100%. The fourth value is the opacity level from 0 (completely transparent) to 1 (completely opaque). You can use this anywhere you’d normally set a colour in CSS — so it’s good for foregrounds and backgrounds, borders, outlines and so on. All the transparency effects on this site’s current design are achieved this way. Supporting All Browsers Like a lot of the features we’ll be looking at in this year’s 24 ways, RGBA colour is supported by a lot of the newest browsers, but not the rest. Firefox, Safari, Chrome and Opera browsers all support RGBA, but Internet Explorer does not. Fortunately, due to the robust design of CSS as a language, we can specify RGBA colours for browsers that support it and an alternative for browsers that do not. Falling back to solid colour The simplest technique is to allow the browser to fall back to using a solid colour when opacity isn’t available. The CSS parsing rules specify that any unrecognised value should be ignored.
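That parsing rule is easy to gloss over, so here’s a quick illustration of it in action (the invalid value is deliberately nonsense):

h1 { color: teal; color: banana; } /* ‘banana’ isn’t a valid colour, so that declaration is dropped and the heading stays teal */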
We can make use of this because a browser without RGBA support will treat a colour value specified with rgba() as unrecognised and discard it. So if we specify the colour first using rgb() for all browsers, we can then overwrite it with an rgba() colour for browsers that understand RGBA. h1 { color: rgb(127, 127, 127); color: rgba(0, 0, 0, 0.5); } Falling back to a PNG In cases where you’re using transparency on a background-color (although not on borders or text) it’s possible to fall back to using a PNG with alpha channel to get the same effect. This is less flexible than using CSS as you’ll need to create a new PNG for each level of transparency required, but it can be a useful solution. Using the same principle as before, we can specify the background in a style that all browsers will understand, and then overwrite it in a way that browsers without RGBA support will ignore. h1 { background: transparent url(black50.png); background: rgba(0, 0, 0, 0.5) none; } It’s important to note that this works because we’re using the background shorthand property, enabling us to set both the background colour and background image in a single declaration. It’s this that enables us to rely on the browser ignoring the second declaration when it encounters the unknown rgba() value. Next Steps The really great thing about RGBA colour is that it gives us the ability to create far more graphically rich designs without the need to use images. Not only does that make for faster and lighter pages, but also for sites that are easier and quicker to build and maintain. CSS values can also be changed in response to user interaction or even manipulated with JavaScript in a way that’s just not so easy using images. Opacity can be changed on :hover or manipulated with JavaScript div { color: rgba(255, 255, 255, 0.8); background-color: rgba(142, 213, 87, 0.3); } div:hover { color: rgba(255, 255, 255, 1); background-color: rgba(142, 213, 87, 0.6); } Clever use of transparency in border colours can help ease the transition between overlay items and the page behind. Borders can receive the RGBA treatment, too div { color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); border: 10px solid rgba(255, 255, 255, 0.3); } In Conclusion That’s a brief insight into RGBA colour, what it’s good for and how it can be used whilst providing support for older browsers. With the current lack of support in Internet Explorer, it’s probably not a technique that commercial designs will want to heavily rely on right away – simply because of the overhead of needing to think about fallback all the time. It is, however, a useful tool to have for those smaller, less critical touches that can really help to finesse a design. As browser support becomes more mainstream, you’ll already be familiar and practised with RGBA and ready to go. 2009 Drew McLellan drewmclellan 2009-12-01T00:00:00+00:00 https://24ways.org/2009/working-with-rgba-colour/ code
208 All That Glisters Tradition has it that at this time of year, families gather together, sit, eat and share stories. It’s an opportunity for the wisdom of the elders to be passed down to the younger members of the tribe. Tradition also has it that we should chase cheese downhill and dunk the nice lady to prove she’s a witch, so maybe let’s not put too much stock in that. I’ve been building things on the web professionally for about twenty years, and although the web has changed immeasurably, it’s probably not changed as much as I have. While I can happily say I’m not the young (always right, always arrogant) developer that I once was, unfortunately I’m now an approaching-middle-age developer who thinks he’s always right and on top of it is extremely pompous. What can you do? Nature has devised this system with the distinct advantage of allowing us to always be right, and only ever wrong in the future or in the past. So let’s roll with it. Increasingly, there seems to be a sense of fatigue within our industry. Just when you think you’ve got a handle on whatever the latest tool or technology is, something new comes out to replace it. Suddenly you find that you’ve invested precious time learning something new and it’s already old hat. The pace of change is so rapid, that new developers don’t know where to start, and experienced developers don’t know where it ends. With that in mind, here’s some fireside thoughts from a pompous old developer, that I hope might bring some Christmas comfort. Reliable and boring beats shiny and new There are so many new tools, frameworks, techniques, styles and libraries to learn. You know what? You don’t have to use them. You’re not a bad developer if you use Grunt even though others have switched to Gulp or Brunch or Webpack or Banana Sandwich. It’s probably misguided to spend lots of project time messing around with build tool fashions when your so last year build tool is already doing what you need. Just a little reminder that it’s about 100 times more important what you build than how you build it.— Chris Coyier (@chriscoyier) December 10, 2017 I think it helps if we understand why so many new solutions exist. Most developers are predisposed to enjoy creating new things more than improving established systems. It’s natural, because it’s actually much easier and more exciting to create something new that works exactly how you think it should be than to improve an existing, imperfect solution. Improving and refactoring a system is hard, and it takes real chops, much more than just building something new. The consequence of this is that new tools appear all the time. A developer will get a fresh new idea of how to tackle a problem – usually out of dissatisfaction with an existing solution, and figure the best way to implement that idea is to build something new around it. Often, that something new will do the same job as something old that already exists; it will just do it in a different way. Sometimes in a better way. Sometimes, just different. xkcd: Standards That’s not to say new tools are bad, and it’s not bad that they exist. We shouldn’t be crushing new ideas, and it’s not wrong to adopt a new solution over an old one, but you know what? There’s no imperative to switch right away. The next time you hit a pain point with your current solution, or have time to re-evaluate, check out what’s new and see how the latest generation of tools and technologies can help. 
There’s no prize for solving problems you don’t have yet, and heading further into the desert in search of water is a survival tactic, not an aspiration. New is better, but also worse Software, much like people, is born with a whole lot of potential and not much utility. Newborns — both digital and meaty — are exciting and cute but they also lead to sleepless nights and pools of vomit. New technology contains lots of useful new features, but it’s also more likely to contain bugs and be subject to more rapid change. Jumping on a new framework is great, right until there are API changes and you need to refactor your entire project to be able to update. More mature solutions have a higher weight of existing projects on their shoulders, and so the need to maintain backward compatibility is stronger. Things still move forward, but in a more controlled way. So how do we balance the need to move technology forward with the need to provide mature and stable solutions for the projects we work on? I think there are a couple of good ways to do that. Get personal Use all the new shiny tools on your side-projects, personal projects, seasonal throw-aways and anywhere where the stakes are low. If you know you can patch around problems without much consequence, go for it. Build your personal blog on a CMS that stores data in the woven bark of a silver birch. Find where it breaks. Find where it excels. Find yourself if you like. When it comes to high-stakes projects, you’ll hopefully have enough experience to know what you’re getting into. Focus on the unique problem That’s not to say you should never risk using a new technology for ‘real’ work. Instead, distinguish the areas of your project where a new technology solves a specifically identified, measurable business objective, versus those where it won’t. A brand new web application framework might be fun to use, but are you in the business of solving a web application framework problem? That new web server made of taffeta might increase static file throughput slightly, but are you in the business of serving static assets, or would it be better to just run up nginx and never have to think about that problem again? (Clue: it’s the nginx one.) But when it comes to building that live sports interface for keeping fans up to date with the blow-by-blow of the big game, that’s where it might make sense to take a risk on an amazing-looking new JavaScript realtime interface framework. That’s the time to run up a breakthrough new message queue server that can deliver jobs to workers via extrasensory perception and keep the score updates flowing instantaneously. Those are the risks worth taking, as those new technologies have the potential to help you solve your core problems in a markedly improved way. Unproven technology is worth the risk if it solves a specific business objective. If it doesn’t, don’t make work for yourself – use something mature and stable. Pick the right tools Our job as developers is to solve problems using code, and do so in an effective and responsible way. You’ve been hired to use your expertise in picking the right tools for the job, and a big part of that is weighing up the risk versus the reward of each part of the system. The best tools for the job might be something cutting edge, but ‘best’ can also mean most stable, reliable or easy-to-hire-for. Go out and learn (and create!) new tools and experiment with them. Understand what problems they solve and what the pitfalls are.
Use them in production for low-stakes projects to get real experience, and then, once you really know their character, think about using them when the stakes are higher. The rest of the time? The tools you’re using now are solid and proven and you know their capabilities and pitfalls well. They might not always be the fashionable candidate, but they often make for a very solid choice. 2017 Drew McLellan drewmclellan 2017-12-24T00:00:00+00:00 https://24ways.org/2017/all-that-glisters/ business
220 Finding Your Way with Static Maps Since the introduction of the Google Maps service in 2005, online maps have taken off in a way not really possible before the invention of slippy map interaction. Although quickly followed by a plethora of similar services from both commercial and non-commercial parties, Google’s first-mover advantage and easy-to-use developer API saw Google Maps become pretty much the de facto mapping service. It’s now so easy to add a map to a web page, there’s no reason not to. Dropping an iframe map into your page is as simple as embedding a YouTube video. But there’s one crucial drawback to both the solution Google provides for you to drop into your page and the code developers typically implement themselves – they don’t work without JavaScript. A bit about JavaScript Back in October of this year, The Yahoo! Developer Network blog ran some tests to measure how many visitors to the Yahoo! home page didn’t have JavaScript available or enabled in their browser. It’s an interesting test when you consider that the audience for the Yahoo! home page (one of the most visited pages on the web) represents about as mainstream a sample as you’ll find. If there’s any such thing as an ‘average Web user’ then this is them. The results surprised me. It varied from region to region, but at most just two per cent of visitors didn’t have JavaScript running. To be honest, I was expecting it to be higher, but this quote from the article caught my attention: While the percentage of visitors with JavaScript disabled seems like a low number, keep in mind that small percentages of big numbers are also big numbers. That’s right, of course, and it got me thinking about what that two per cent means. For many sites, two per cent is the number of visitors using the Opera web browser, using IE6, or using Mobile Safari. So, although a small percentage of the total, users without JavaScript can’t just be forgotten about, and catering for them is at the very heart of how the web is supposed to work. Starting with content in HTML, we layer on presentation with CSS and then enhance interactivity with JavaScript. If anything fails along the way, the network craps out, or a browser just doesn’t support one of the technologies, the user still gets something they can work with. It’s progressive enhancement – also known as doing our jobs properly. Sorry, wasn’t this about maps? As I was saying, the default code Google provides, and the example code it gives to developers (which typically just gets followed ‘as is’), don’t account for users without JavaScript. No JavaScript, no content. When adding the ability to publish maps to our small content management system Perch, I didn’t want to provide a solution that only worked with JavaScript. I had to go looking for a way to provide maps without JavaScript, too. There’s a simple solution, fortunately, in the form of static map tiles. All the various slippy map services use a JavaScript interface on top of what are basically rendered map image tiles. Dragging the map loads in more image tiles in the direction you want to view. If you’ve used a slippy map on a slow connection, you’ll be familiar with seeing these tiles load in one by one. The Static Map API The good news is that these tiles (or tiles just like them) can be used as regular images on your site.
Google has a Static Map API which not only gives you a handy interface to retrieve a tile for the exact area you need, but also allows you to place pins, and zoom and centre the tile so that the image looks just so. This means that you can create a static, non-JavaScript version of your slippy map’s initial (or ideal) state to load into your page as a regular image, and then have the JavaScript map hijack the image and make it slippy. Clearly, that’s not going to be a perfect solution for every map’s requirements. It doesn’t allow for panning, zooming or interrogation without JavaScript. However, for the majority of straightforward map uses online, a static map makes a great alternative for those visitors without JavaScript. Here’s the how Retrieving a static map tile is staggeringly easy – it’s just a case of forming a URL with the correct arguments and then using that as the src of an image tag. <img src="http://maps.google.com/maps/api/staticmap ?center=Bethlehem+Israel &zoom=5 &size=540x280 &maptype=satellite &markers=color:red|31.4211,35.1144 &sensor=false" width="540" height="280" alt="Map of Bethlehem, Israel" /> As you can see, there are a few key options that we pass along to the base URL. All of these should be familiar to anyone who’s worked with the JavaScript API. center determines the point on which the map is centred. This can be latitude and longitude values, or simply an address which is then geocoded. zoom sets the zoom level. size is the pixel dimensions of the image you require. maptype can be roadmap, satellite, terrain or hybrid. markers sets one or more pin locations. Markers can be labelled, have different colours, and so on – there’s quite a lot of control available. sensor states whether you are using a sensor to determine the user’s location. When just embedding a map in a web page, set this to false. There are many options, including plotting paths and setting the image format, which can all be found in the straightforward documentation. Adding to your page If you’ve worked with the JavaScript API, you’ll know that it needs a container element which you inject the map into: <div id="map"></div> All you need to do is put your static image inside that container: <div id="map"> <img src="http://maps.google.com/maps/api/staticmap[...]" /> </div> And then, in your JavaScript, find the image and remove it. For example, with jQuery you’d simply use: $('#map img').remove(); Why not use a <noscript> element around the image? You could, and that would certainly work fine for browsers that do not support JavaScript. What that won’t cover, however, is the situation where the browser has JavaScript support but, for whatever reason, the JavaScript doesn’t run. This could be due to network issues, an aggressive corporate firewall, or even just a bug in your code. So for that reason, we put the image in for all browsers that show images, and then remove it when the JavaScript is successfully running. See an example in action About rate limits The Google Static Map API limits the requests per site viewer – currently at one thousand distinct maps per day per viewer. So, for most sites you really don’t need to worry about the rate limit. Requests for the same tile aren’t normally counted, as the tile has already been generated and is cached. You can embed the images direct from Google and let it worry about the distribution and caching. In conclusion As you can see, adding a static map alongside your dynamic map for those users without JavaScript is very easy indeed. 
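One last practical note: if jQuery isn’t already part of your page, the same image removal can be done with plain DOM scripting. A minimal sketch, assuming the container markup shown above:

var map = document.getElementById('map');
var img = map.getElementsByTagName('img')[0];
if (img) { map.removeChild(img); } // run this once the JavaScript map is up and running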
There may not be a huge percentage of web visitors browsing without JavaScript but, as we’ve seen, a small percentage of a big number is still a big number. When it’s so easy to add a static map, can you really justify not doing it? 2010 Drew McLellan drewmclellan 2010-12-01T00:00:00+00:00 https://24ways.org/2010/finding-your-way-with-static-maps/ code
264 Dynamic Social Sharing Images Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you’d written a blog post and wanted to share it, you could post a link and your followers would see it in their streams. Oh heady days! With so many social channels and a proliferation of content and promotions flying past in everyone’s streams, it’s no longer enough to share content on social media; you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader’s attention if you’re trying to get as many eyes as possible on that sweet, sweet social content. One of the best ways to grab attention with your posts or tweets is to include an image. There’s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures of anything from 35% to 150% improvement just from having an image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much. So without hard stats to quote, we’ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them. Adding sharing images The process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified. <meta property="og:image" content="https://example.com/my_image.jpg"> There’s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing. This is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don’t necessarily have an image? One approach is to use stock photography, but that’s not going to be right for every situation. This was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don’t usually lend themselves directly to being the main ‘hero’ image for an article. Putting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article. One of the hand-made sharing images from 2017 Each day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this. Hatching a new plan One initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.
The more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS! In fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn’t this just be another page on the site generated by the CMS? Breaking it down, I figured the steps needed would be something like: 1) create the CSS to lay out a component that could be turned into an image; 2) add a new field to articles in the CMS to hold a handpicked quote; 3) build a new article template in the CMS to output the author name and quote dynamically for any article; 4) … um … screenshot? I thought I’d get cracking and see if I could figure out the final steps later. Building the page The first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul. Paul’s code uses a fixed dimension container in the HTML, set to 600 × 315px. This is to make it the correct aspect ratio for Facebook’s recommended image size. It’s useful to remember here that it doesn’t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser. With the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul’s markup. I also added the quote as a new field on the article in the CMS, so each ‘image’ could be quickly and easily customised in the editing process. I added a new field to the article template to capture the quote. With very little effort, we quickly had a page to dynamically generate our ‘image’ right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article. Our automatically generated layout direct from the CMS It soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that’s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought… how could I automatically take a screenshot? Enter Puppeteer Puppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It’s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface. Headless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or… get this… rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node. Using Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it’s actually Puppeteer’s ‘hello world’ example.
First you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don’t need to worry about installing that. At the command line: npm i puppeteer Then save a new file as example.js with this code: const puppeteer = require('puppeteer'); (async () => { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto('https://example.com'); await page.screenshot({path: 'example.png'}); await browser.close(); })(); and then run it using Node: node example.js This will output an image file example.png to disk, which contains a screenshot of, in this case, https://example.com. The logic of the code is reasonably easy to follow: launch a browser, open up a new page, goto a URL, take a screenshot, and close the browser. The async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That’s useful with actions like loading a web page that might take some time to complete. They’re used with Promises, and the effect is to make asynchronous code behave as if it’s synchronous. You can read more about async and await at MDN if you’re interested. That’s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there? Our content is up in the corner of the image with lots of empty space. That’s not great. It’s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 × 600, so we need to find out how to adjust that. Fortunately, the docs aren’t the worst and I was able to find the page.setViewport() method pretty easily. const puppeteer = require('puppeteer'); (async () => { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing'); await page.setViewport({ width: 600, height: 315 }); await page.screenshot({path: 'example.png'}); await browser.close(); })(); This worked! The screenshot is now 600 × 315 as expected. That’s exactly what we asked for. Trouble is, that’s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels. await page.setViewport({ width: 600, height: 315, deviceScaleFactor: 2 }); Perfect! We now have a programmatic way of turning a URL into an image. Improving the script Rather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site’s build process to automate the generation of the image. Our goal is to call the script like this: node shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/ And I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.
// Get the URL and the slug segment from it const url = process.argv[2]; const segments = url.split('/'); // Get the second-to-last segment (the slug) const slug = segments[segments.length-2]; We can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page. (async () => { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto(url + 'sharing'); await page.setViewport({ width: 600, height: 315, deviceScaleFactor: 2 }); await page.screenshot({path: slug + '.png'}); await browser.close(); })(); Once you’re generating the image with Node, there are all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project. You can also run optimisations on the file. I’m using imagemin with pngquant to reduce the file size a little. const imagemin = require('imagemin'); const imageminPngquant = require('imagemin-pngquant'); await imagemin([slug + '.png'], 'build', { plugins: [ imageminPngquant({quality: '75-90'}) ] }); You can see the completed example as a gist. Integrating it with your CMS So we now have a command we can run to take a URL and generate a custom image for that URL. It’s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you’re using, but it’s likely not too hard as long as you can run a command as part of the process. For 24 ways this year, I’ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I’ll look to automate this one step further. We may also look at having a few slightly different layouts to choose from, so that each day isn’t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there’s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities! By using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we’ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images. I hope you’ll give it a try! 2018 Drew McLellan drewmclellan 2018-12-24T00:00:00+00:00 https://24ways.org/2018/dynamic-social-sharing-images/ code
271 Creating Custom Font Stacks with Unicode-Range Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. We’ve all used it — either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts. If you’re like me, however, you’ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec. One such property — the unicode-range descriptor — sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways. Unicode-range The unicode-range descriptor is designed to help when using fonts that don’t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers. @font-face { font-family: BBCBengali; src: url(fonts/BBCBengali.ttf) format("truetype"); unicode-range: U+00-FF; } In this example, the font is to be used for characters in the range of U+00 to U+FF which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to ÿ at U+FF – the extent of the Basic Latin and Latin-1 Supplement character ranges. By adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts. When I say that it’s possible to specify the range of characters the font covers, that’s true, but what you’re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together. The best available ampersand A few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a <span> element with a class applied: <span class="amp">&</span> A CSS rule can then be written to select the <span> and apply a different font: span.amp { font-family: Baskerville, Palatino, "Book Antiqua", serif; } That’s a perfectly serviceable technique, but the drawbacks are clear — you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn’t always possible when working with a CMS. Perhaps we could do this with unicode-range. A better best available ampersand The Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so: @font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } What we’ve done here is specify a new family called Ampersand and created a font stack for it with the user’s locally installed copies of Baskerville, Palatino or Book Antiqua. We’ve then limited it to a single character range — the ampersand. Of course, those don’t need to be local fonts — they could be web font files, too. If you have a font with a really snazzy ampersand, go for your life. We can then use that new family in a regular font stack.
h1 { font-family: Ampersand, Arial, sans-serif; } With this in place, any <h1> elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn’t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial. You didn’t think it was that easy, did you? Oh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all. If you’re familiar with how CSS works when it comes to unsupported properties, you’ll know that if a browser encounters a property it doesn’t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius — if the browser can’t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect. Less perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters — the whole range. If you’re using a fancy font for flamboyant ampersands, you probably don’t want that applied to all your text if unicode-range isn’t supported. That would be bad. Really bad. Ensuring good fallbacks As ever, the trick is to make sure that there’s a sensible fallback in place if a browser doesn’t have support for whatever technology you’re trying to use. This is where being a super nerd about understanding the spec you’re working with really pays off. We can make use of the rules of the CSS cascade to make sure that if unicode-range isn’t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren’t implemented. @font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } @font-face { font-family: 'Ampersand'; src: local('Arial'); } In theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don’t have support, the second rule should just supersede the first, setting the font to Arial. Unfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That’s both unexpected and unfortunate. Bad Firefox. On your rug. If that doesn’t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That’s really what we’re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we’re never going to use in our page? Then it wouldn’t affect the outcome for browsers that do support ranges. 
@font-face { font-family: 'Ampersand'; src: local('Baskerville'), local('Palatino'), local('Book Antiqua'); unicode-range: U+26; } @font-face { /* Ampersand fallback font */ font-family: 'Ampersand'; src: local('Arial'); unicode-range: U+270C; } By specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we’re not using in any of our pages — U+270C. So we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect! That obscure character, my friends, is what Unicode defines as the VICTORY HAND. ✌ So, how can we use this? Ampersands are a neat trick, and it works well in browsers that support ranges, but that’s not really the point of all this. Styling ampersands is fun, but they’re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting. How do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end. Of course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you’re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you’ll have to use your own best judgement based on your needs and audience. One thing to keep in mind is that if you’re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn’t be downloaded if none of the characters within the Unicode range are present in a given page. As ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along. 2011 Drew McLellan drewmclellan 2011-12-01T00:00:00+00:00 https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/ code
300 Taking Device Orientation for a Spin When The Police sang “Don’t Stand So Close To Me” they weren’t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we’re there, we’re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot. But if you can’t beat them, join them. A brave new world A couple of years back all sorts of specs for new HTML5 APIs sprang up much to our collective glee. Emboldened, we ran a few tests and found they basically didn’t work in anything and went off disheartened into the corner for a bit of a sob. Turns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. Mostly literally half usable—we’re still talking about browsers, after all. Now, of course they’re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from Github that will fall out of fashion while we’re still trying to locate the documentation, right? Well, no! So what if we actually wanted to use one of these APIs, say to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed), how could we do something like that? Let’s find out. The Device Orientation API The phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn’t work, but also gives us three values that represent orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of it as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat. The main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too. The API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away: window.addEventListener('deviceorientation', function(e) { console.log(e.alpha); console.log(e.beta); console.log(e.gamma); }); If you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended. One important note Like many of these newfangled APIs, Device Orientation is only available over HTTPS. We’re not allowed to have too much fun without protection, so make sure that you’re working on a secure line. I’ve found a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel. ngrok http -host-header=rewrite mylocaldevsite.dev:80 ngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option. Right, where were we? Make something to look at It’s all well and good having a bunch of numbers, but they’re no use unless we do something with them. Something creative. Something to inspire the generations. Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we’re not trying to be too clever here). Yeah, let’s just build one of those.
Our basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, with its CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the ‘window’ so that the part we want to show is visible.

Here it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.)

The details of the slider aren’t important (we’re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. This takes a percentage value (0-100) with 0 being far left and 100 being far right (or ‘alt-nazi’ or whatever).

var position_image = function(percent) {
  // img is the wide image element; img_W is its width in pixels
  var pos = (img_W / 100) * percent;
  img.style.transform = 'translate(-' + pos + 'px)';
};

All this does is figure out what that percentage means in terms of the image width, and set the transform: translate(…); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.)

Ok. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we’re done. (We’re so not done.)

The maths bit

If we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you’ll note that the main value that is changing as we swing back and forth is the ‘alpha’ value. That’s the one we want to track.

As our goal here is hey, these APIs are interesting and fun and not let’s build the world’s best panoramic image viewer, we’ll start by making a few assumptions and simplifications:

When the image loads, we’ll centre the image and take the current nose-forward orientation reading as the middle.
Moving left, we’ll track to the left of the image (lower percentage).
Moving right, we’ll track to the right (higher percentage).
If the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they’re on their own.

Nose-forward

When the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn’t really matter to us, as we don’t know which direction the user might be facing in anyway — we just need to record that initial state and then use it to compare any new readings.

var initial_position = null;
window.addEventListener('deviceorientation', function(e) {
  if (initial_position === null) {
    initial_position = Math.floor(e.alpha);
  }
  var current_position = initial_position - Math.floor(e.alpha);
});

(I’m rounding down the values with Math.floor() to make debugging easier - we’ll take out the rounding later.)

We get our initial position if it’s not yet been set, and then calculate the current position as a difference between the new value and the stored one.

These values are weird

One thing you need to know about these values is that they range from 0 to 360, but then you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero as you’d expect if you do a forward roll.

What I’m interested in is working out my rotation.
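As an aside, the general-purpose fix for wrapping angles is the double-modulo trick, which folds any difference in degrees into the range [-180, 180). A hypothetical helper, not something this demo uses:

// Fold an arbitrary difference in degrees into [-180, 180)
function normaliseAngle(diff) {
  return ((diff % 360) + 540) % 360 - 180;
}

For example, normaliseAngle(350) gives -10 and normaliseAngle(-350) gives 10. For this demo, though, a couple of explicit comparisons are easier to follow.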
If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can’t see it.

var rotation = current_position;
if (current_position > 180) rotation = current_position - 360;

Which way up?

Since we’re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait or landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. That’s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn’t well supported. The best I can come up with is:

var screen_portrait = false;
if (window.innerWidth < window.innerHeight) {
  screen_portrait = true;
}

It works. Then we can use screen_portrait to branch our code:

if (screen_portrait) {
  if (current_position > 180) rotation = current_position - 360;
} else {
  if (current_position < -180) rotation = 360 + current_position;
}

Here’s the code in action so you can see the values for yourself. If you change screen orientation you’ll need to refresh the page (it’s a demo!).

Limiting rotation

Now, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360° to view a photo. Instead, we need to limit the range of movement to something like 60°-from-nose in either direction and normalise our values to pan the entire image across that 120° range. -60 would be full-left (0%) and 60 would be full-right (100%).

If we set max_rotation = 60, that code ends up looking like this:

if (rotation > max_rotation) rotation = max_rotation;
if (rotation < (0 - max_rotation)) rotation = 0 - max_rotation;
var percent = Math.floor(((rotation + max_rotation) / (max_rotation * 2)) * 100);

We should now be able to get a rotation from -60° to +60° expressed as a percentage. Try it for yourself.

The big reveal

All that’s left to do is pass that percentage to our image positioning function and would you believe it, it might actually work.

position_image(percent);

You can see the final result and take it for a spin. Literally.

So what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it.

What we have made is progress. We’ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn’t have this morning. Something we probably didn’t even want this morning. Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting.

But more importantly, we’ve learned that our browsers are just a little bit more capable than we thought. The web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy things that are often dismissed as the preserve of native apps. Like some sort of app marmalade. Poppycock.

The web is an amazing, exciting place to create things. All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let’s create things.

2016 Drew McLellan drewmclellan 2016-12-24T00:00:00+00:00 https://24ways.org/2016/taking-device-orientation-for-a-spin/ code
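Pulling the fragments above together, the whole orientation-to-percentage flow fits into a single listener. A condensed sketch under the same assumptions as the article (position_image and screen_portrait as defined earlier, debug rounding removed):

var initial_position = null;
var max_rotation = 60; // degrees of turn mapped to each half of the image

window.addEventListener('deviceorientation', function(e) {
  if (initial_position === null) initial_position = e.alpha;
  var current_position = initial_position - e.alpha;

  // Put the awkward 360° tipping point behind the user
  var rotation = current_position;
  if (screen_portrait) {
    if (current_position > 180) rotation = current_position - 360;
  } else {
    if (current_position < -180) rotation = 360 + current_position;
  }

  // Clamp to ±max_rotation and map onto a 0-100% pan
  if (rotation > max_rotation) rotation = max_rotation;
  if (rotation < -max_rotation) rotation = -max_rotation;
  position_image(((rotation + max_rotation) / (max_rotation * 2)) * 100);
});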
314 Easy Ajax with Prototype

There’s little more impressive on the web today than an appropriate touch of Ajax. Used well, Ajax brings a web interface much closer to the experience of a desktop app, and can turn a bear of a task into a pleasurable activity. But it’s really hard, right? It involves all the nasty JavaScript that no one ever does often enough to get really good at, and the browser support is patchy, and urgh it’s just so much damn effort. Well, the good news is that – ta-da – it doesn’t have to be a headache. But man does it still look impressive. Here’s how to amaze your friends.

Introducing prototype.js

Prototype is a JavaScript framework by Sam Stephenson designed to help make developing dynamic web apps a whole lot easier. In basic terms, it’s a JavaScript file which you link into your page that then enables you to do cool stuff. There’s loads of capability built in, a portion of which covers our beloved Ajax. The whole thing is freely distributable under an MIT-style license, so it’s good to go. What a nice man that Mr Stephenson is – friends, let us raise a hearty cup of mulled wine to his good name. Cheers! sluurrrrp.

The first step is to download the latest Prototype and put it somewhere safe. I suggest underneath the Christmas tree.

Cutting to the chase

Before I go on and set up an example of how to use this, let’s just get to the crux. Here’s how Prototype enables you to make a simple Ajax call and dump the results back to the page:

var url = 'myscript.php';
var pars = 'foo=bar';
var target = 'output-div';
var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});

This snippet of JavaScript does a GET to myscript.php, with the parameter foo=bar, and when a result is returned, it places it inside the element with the ID output-div on your page.

Knocking up a basic example

So to get this show on the road, there are three files we need to set up in our site alongside prototype.js. Obviously we need a basic HTML page with prototype.js linked in. This is the page the user interacts with. Secondly, we need our own JavaScript file for the glue between the interface and the stuff Prototype is doing. Lastly, we need the page (a PHP script in my case) that the Ajax is going to make its call to.

So, to that basic HTML page for the user to interact with. Here’s one I found whilst out carol singing:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
  <title>Easy Ajax</title>
  <script type="text/javascript" src="prototype.js"></script>
  <script type="text/javascript" src="ajax.js"></script>
</head>
<body>
  <form method="get" action="greeting.php" id="greeting-form">
    <div>
      <label for="greeting-name">Enter your name:</label>
      <input id="greeting-name" type="text" />
      <input id="greeting-submit" type="submit" value="Greet me!" />
    </div>
    <div id="greeting"></div>
  </form>
</body>
</html>

As you can see, I’ve linked in prototype.js, and also a file called ajax.js, which is where we’ll be putting our glue. (Careful where you leave your glue, kids.)

Our basic example is just going to take a name and then echo it back in the form of a seasonal greeting. There’s a form with an input field for a name, and crucially a DIV (greeting) for the result of our call.
You’ll also notice that the form has a submit button – this is so that it can function as a regular form when no JavaScript is available. It’s important not to get carried away and forget the basics of accessibility.

Meanwhile, back at the server

So we need a script at the server which is going to take input from the Ajax call and return some output. This is normally where you’d hook into a database and do whatever transaction you need to before returning a result. To keep this as simple as possible, all this example here will do is take the name the user has given and add it to a greeting message. Not exactly Web 2-point-HoHoHo, but there you have it.

Here’s a quick PHP script – greeting.php – that Santa brought me early.

<?php
$the_name = htmlspecialchars($_GET['greeting-name']);
echo "<p>Season's Greetings, $the_name!</p>";
?>

You’ll perhaps want to do something a little more complex within your own projects. Just sayin’.

Gluing it all together

Inside our ajax.js file, we need to hook this all together. We’re going to take advantage of some of the handy listener routines and such that Prototype also makes available. The first task is to attach a listener to set the scene once the window has loaded. Here’s how we attach an onload event to the window object and get it to call a function named init():

Event.observe(window, 'load', init, false);

Now we create our init() function to do our evil bidding. Its first job of the day is to hide the submit button for those with JavaScript enabled. After that, it attaches a listener to watch for the user typing in the name field.

function init(){
  $('greeting-submit').style.display = 'none';
  Event.observe('greeting-name', 'keyup', greet, false);
}

As you can see, this is going to make a call to a function called greet() onkeyup in the greeting-name field. That function looks like this:

function greet(){
  var url = 'greeting.php';
  var pars = 'greeting-name=' + escape($F('greeting-name'));
  var target = 'greeting';
  var myAjax = new Ajax.Updater(target, url, {method: 'get', parameters: pars});
}

The key points to note here are that any user input needs to be escaped before being put into the parameters so that it’s URL-ready. The target is the ID of the element on the page (a DIV in our case) which will be the recipient of the output from the Ajax call.

That’s it

No, seriously. That’s everything. Try the example. Amaze your friends with your 1337 Ajax sk1llz.

2005 Drew McLellan drewmclellan 2005-12-01T00:00:00+00:00 https://24ways.org/2005/easy-ajax-with-prototype/ code
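If you want to massage the response before it hits the page, rather than injecting it wholesale, Prototype also provides Ajax.Request, which hands the raw transport object to a callback. A minimal sketch along the same lines as the example above (the alert is just a stand-in for something more useful):

new Ajax.Request('greeting.php', {
  method: 'get',
  parameters: 'greeting-name=' + escape($F('greeting-name')),
  onComplete: function(transport) {
    // transport is the XMLHttpRequest; do what you like with its responseText
    alert(transport.responseText);
  }
});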