We are told in the mythical Introduction to Web Development 101 that the first thing we should do when building a new website is to establish who the audience is, and then build the site with them in mind. In practice this very often doesn’t happen. Even when we pay lip service to being audience-centric and go through the motions of identifying an audience, we very often then go forth and build the site just like any other.
Just how important this can be struck me earlier this week when one of my colleagues sent out an email saying “Sexiest website I’ve seen in a while” with a link to the Apple MacBook page. My first reaction was “these animations are annoying.” As I continued to look, that turned into frustration over how difficult it was to find actual information about the product, which in turn led to “why don’t they do this like Dell, where all of the product information is in one easy-to-read chart?” On the face of it, both companies sell computers, so it might seem logical that their websites wouldn’t be terribly different, right?
Upon further reflection, the answer is clear. Dell and Apple users are quite different. What appeals to one doesn’t resonate with the other. Both companies understand what their customers are looking for and have built their websites very differently, but with their respective audiences in mind.
We in higher ed can learn from this example. Instead of getting into the routine of doing every site in an institutional template — or the opposite, making every site with a splashy graphical experience — we should target the site design to the audience we are trying to reach. For some, a dazzling experience would be the preferred approach. A perfect example would be the Reveille site that we just launched. On the other hand, a site built for a targeted set of academics might need to tone down the design and focus on quick and efficient delivery of content. We can do both. In fact, we should do both. This type of flexibility might take some of us out of our comfort zone, but if we take our cue from two industry leaders that is exactly how we will most effectively share our content.
Those of you who attended the IT Summit a few weeks ago might remember my presentation discussing the politics that can come into play in IT projects. I apologize for the delay, but I have finally gotten the slide deck posted to SlideShare.
Conversation with the audience afterward reminded me of a few other slides that could have been included, so I have added them as an Easter Egg for anybody who saw the presentation and wants to go find them.
At last week’s IT Summit we were given a preview of the new Texas A&M System IT website. Of note is the CIO Blog section. This is the online version of the weekly updates that Mark Stone had previously sent out by email. This is a great resource for keeping up to date with information on news, contracting, projects, and other important updates. For those who prefer it, the content for just the weekly update is also available via RSS.
I spent much of last week in Galveston for the TAMU System IT Summit. It was a great opportunity to see what is going on within the system and to network with fellow IT staff from both our campus and our sister universities and organizations.
The format of expanding the former TTVN Conference to a more general IT Summit was a success. We had many people show up who had never attended in the past. We did learn, though, that we probably started too late. Many people that I talked to said that they had only obtained permission or funding to attend the week before. I would assume, then, that many didn’t attend because they didn’t have enough lead time to make arrangements. We will therefore begin the planning process much earlier for next year so that everyone has the opportunity to attend.
For those who weren’t able to be there in person, February’s IT Forum will feature a lightning round of some of the presentations from last week’s Texas A&M Technology Summit. Each of the presenters below will speak for 10 minutes on their topic. Q&A sessions will follow.
The forum will be held Wednesday, February 18, at 3:00 p.m. in Rudder 601, and will also be streamed on TTVN channel 20.
Some thoughts and responses to Lars Damgaard’s post How to avoid ux design trends and why you should. Thanks also to some of our best designers for stimulating my thinking:
- You can’t amaze people by doing the same thing that everybody else already did.
- A web design that screams “2015” in 2015 will also scream “2015” in 2019. Great design doesn’t have to scream.
- Good design is obvious. Great design is transparent. – Joe Sparano
- Nevertheless, great design can be tested. Research can inspire great design.
- Designs that win awards might not win users. If you lose sight of users, you become short-sighted.
- Form must follow function. Design is problem solving and communication, not decoration.
- A new design trend is not progressive because it’s new. It’s progressive if it communicates. It’s progressive if it solves problems.
- Trends change, people don’t. The paradox? Great design can be oddly conservative, because it’s grounded in human experience.
- A well-designed glove accommodates itself to the form of the hand. It doesn’t try to break new ground by providing one less finger.
- One outdated web design trend emphasized small fonts and low contrast. Did fashion kill this trend, or did human optics?
- Trends can’t dictate the ideal size of mobile buttons because they can’t dictate the size of fingertips.
- Fashion can compel trendy college students to wear shorts. Even in winter. But not in deep snow.
Practically all Aggies know the story of E. King Gill and the 12th Man. Not all Aggies, and certainly not most people across the country, know that this is actually a registered trademark that belongs to the university. To help spread this message, as well as show the long history of the 12th Man at Texas A&M, the Division of Marketing & Communications has launched a new site which outlines the history of the 12th Man from the time of E. King Gill through the modern era.
As I’ve mentioned before, we’ve been using Varnish to improve our performance on WordPress sites for a while now. Though we are on our third iteration of the cache server, there always seem to be a few kinks to work out when we launch something new. The “Today” news site is no exception.
Our first release on Today was not as smooth as we wanted. The performance was way below what we were expecting, and below what we saw on Tamutimes. We discovered that something was circumventing the cache server and directly accessing the backend web and database servers. Upon further inspection, we found that the culprit was some redirects from the old Tamutimes site. After fixing those, things were much smoother.
With that in mind, I thought I would use this post and the next to go over two of the built-in Varnish monitoring tools that helped us identify what was happening.
Varnishhist is a neat tool that basically just shows a live graphic of what is going on with your cache server. It is invoked by simply running varnishhist on the command line.
And here is an example of what our cache server looks like at a normal time:
Reading this is fairly simple. It really is just showing the cache “hits” and “misses” and the time it took to serve each request. The “|” symbols are the hits and “#” represents a miss, or a call to the backend server. Along the bottom is the time to serve the request, on a logarithmic scale: 1e0 is equal to one second, so 1e-1 is equal to 1/10th of a second and 1e-6 is equal to 0.000001 seconds, or one microsecond. Anything in the -4 to -5 range is pretty good.
Also from this graph you can roughly see that almost all of the requests are hitting the cache. None of these requests are even making it to the Apache server. This is massively helpful with WordPress. You can see how much slower the misses are when they have to go all the way through Apache and likely make a database call and return that information.
When we launched the Today site, we were seeing almost the exact opposite of this graphic. The peak was all on the misses side at around 1/10th to 1 second. We knew something was wrong immediately. While Varnishhist doesn’t show a lot of detail, it can be very helpful to have running to give you a very quick look at what is going on.
Next post I will go over the other main tool – varnishstat.
One of the projects we have been working on for a while is Digital Asset Management. Our photo team had evaluated several commercial products, and found that most, if not all, of them were too expensive for us to be able to afford as a single division. We therefore shifted gears and started looking at some of the open source solutions.
As we went through this process we found others on campus who were also looking at the same problem. Last week I sent a note out to the campus communicators’ list asking who might be interested in getting a team together to investigate a central service. I was overwhelmed by the results: over twenty colleges, departments, and offices expressed an interest in joining.
In order to facilitate communication on this effort I have set up a local listserv list. If anyone would like to participate in the conversation please contact me and I can get you added to the list.
I look forward to this becoming a successful shared service that can benefit the entire campus community.
Some takeaways from Jared Smith’s ARIA Gone Even Wilder session at the Environments for Humans Accessibility Summit.
ARIA stands for Accessible Rich Internet Applications. As Jared Smith explains, “ARIA allows us to expand the vocabulary of HTML to include things that screen readers already understand.”
To do this, web developers use HTML tags and attributes. Probably the most important of these attributes is role, but there are many others.
Most HTML5 tags have default ARIA roles. For example, an h1 tag automatically has the ARIA role of ‘heading’. Use HTML first, then use ARIA to fill in the gaps.
Roles don’t change browser behavior. That is, giving a span the role of button doesn’t make the span act like a button (clickable, focusable, etc.); you have to add that behavior yourself. If you don’t, your widget will require overhead/orientation for everyone, to teach them how to use it. You don’t need to duplicate native roles: if you define a button using the button tag, you won’t have to worry about faking it.
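A minimal sketch of the difference (the save() handler name here is made up for illustration):

```html
<!-- Native button: focusable, clickable, and announced as a button for free. -->
<button type="button" onclick="save()">Save</button>

<!-- A span with role=button is only *announced* as a button. To make it
     act like one, you must also add tabindex and keyboard handling yourself. -->
<span role="button" tabindex="0"
      onclick="save()"
      onkeydown="if (event.key === 'Enter' || event.key === ' ') save();">
  Save
</span>
```

The native element is less code and harder to get wrong, which is exactly the point of the session’s advice.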
ARIA supports a lot of user interface widgets (such as menu, slider, checkbox), and users need to learn how to use them. They may know tab only and not arrow keys. What if you do it right and they don’t understand? Should we break the rules/standards? No.
Common misunderstandings about ARIA roles:
- Navs are rarely menus! Menus are File-Edit-View etc.
- Lists are rarely a listbox. Listbox is a select menu.
- Tables are rarely grids. Grids are interactive.
- Dynamic content is rarely a live region (aria-live="assertive"). If focus is set to it, it isn’t a live region.
- Important information is rarely an alert. Alerts are read immediately.
- Not all groups of links are navigation. Navigation lets you move within a page or across the site. Social icons are not navigation.
- jQuery dialogs always have role=dialog regardless of their content, which makes them mostly inaccessible.
Navigation by type is not yet widely available. You can’t jump to it. Landmarks are usually identified anyway. It would be great for browsers to support keyboard navigation by structural elements (headings, nav, main) and ARIA landmarks. Then we wouldn’t need skip links.
Landmarks and labels
You can use aria-label or aria-labelledby to differentiate multiple landmarks of the same type.
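A sketch of both approaches on nav landmarks (the label text is hypothetical):

```html
<!-- Two nav landmarks, differentiated by aria-label... -->
<nav aria-label="Primary">...</nav>
<nav aria-label="Breadcrumb">...</nav>

<!-- ...or by aria-labelledby pointing at a visible heading. -->
<h2 id="related-heading">Related links</h2>
<nav aria-labelledby="related-heading">...</nav>
```

A screen reader user browsing by landmark then hears “Primary navigation” and “Breadcrumb navigation” instead of two indistinguishable “navigation” announcements.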
Labels vs. descriptions: label is necessary, description is advisory
There is no aria-description attribute; use aria-describedby instead. It might be read twice if the description is already in context.
ARIA labels replace the link or label text, so a screen reader reads aria-label="PDF" instead of the visible text.
Use off-screen text or aria-describedby.
Ensure labels are on the focusable element, not on a parent, child, or container.
HTML input attributes, however, can change behavior.
Users assume that if you can tab to it, you can interact with it. Don’t put tabindex=0 on text, only on links and buttons.
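A quick sketch of that rule (the content is made up):

```html
<!-- Don't: static text in the tab order invites users to try to
     interact with something that does nothing. -->
<p tabindex="0">Our office is open 8-5, Monday through Friday.</p>

<!-- Do: reserve the tab order for things users can act on. -->
<p>Our office is open 8-5, Monday through Friday.</p>
<a href="/hours">See full hours</a>
```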
ARIA roles are an example of progressive enhancement – usually you can’t make your website less accessible by using ARIA (though it’s possible if you do it wrong), but older browsers might not see the newer attributes.
Last fall I joined the W3C’s Responsive Images Community Group after being inspired by (and having lunch with) Mat “Wilto” Marquis, the group’s chair, when he spoke at An Event Apart in Austin. The RICG calls itself “a group of independent designers and developers working toward new web standards that will build fast, accessible, responsive websites.” Mostly my contributions have been limited to fixing typos on GitHub and making wisecracks on IRC, but I’m pleased to have contributed to the newly released RICG Responsive Images WordPress plugin. (Technically, I added support for PHP 5.3 and below, versions which don’t support function array dereferencing.)
The RICG’s first achievement was to push through a new HTML element, picture, along with its friends srcset and sizes. These allow web developers to offer images in various sizes and let the browser decide the best one to use. A retina phone can display the 2X retina image, an older feature phone can display a little 200px image, and your ultra-wide monitor can display a full-bleed 1920×1080 image. Best of all, the browser only downloads the image it needs: no double download.
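As a sketch of how these fit together (the filenames and breakpoints here are hypothetical):

```html
<!-- srcset lists the available sizes; sizes tells the browser how wide the
     image will render, so it can pick the smallest file that still looks
     sharp at the current viewport width and pixel density. -->
<img src="news-400.jpg"
     srcset="news-400.jpg 400w,
             news-800.jpg 800w,
             news-1920.jpg 1920w"
     sizes="(min-width: 60em) 1920px, 100vw"
     alt="Students on campus">

<!-- picture adds art direction: serve a tighter crop on small screens. -->
<picture>
  <source media="(max-width: 30em)" srcset="news-crop-400.jpg">
  <img src="news-1920.jpg" alt="Students on campus">
</picture>
```

With the first form the browser makes the choice; with picture, the author dictates which source wins at each breakpoint.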
I consider responsive images a perfect example of progressive enhancement: fully supported in Chrome and Opera, and partially supported in WebKit and Firefox. Even Microsoft is working on adding these features to its new browser. And the Picturefill script adds responsive image capabilities to any browser. Until now, responsive web developers like me have been sending oversized images to be squeezed onto little phones, wasting much of our users’ bandwidth. Mat Marquis says that people in Africa actually have to keep lists of which websites they can’t visit without draining their entire monthly bandwidth. So if we use responsive images and someone’s mobile browser doesn’t support them, the user experience will be no worse off. But if the browser does, their experience will be much faster and more pleasant. An image on a mobile phone might only need to be 1/5 the size in kilobytes of an image on a typical laptop.
The WordPress plugin is a first step to adding responsive images to the core of WordPress itself. When that happens, webmasters of 60 million websites can upload large, clear images, and their visitors will only download the image size they need. The plugin comes bundled with Picturefill, and since WordPress already keeps track of multiple image sizes automatically, the future looks good.