
TAMU Webmaster's Blog


Information and insight from the A&M Webmasters


Is Google+ Still Relevant?

I have heard many musings about Google+ in the last several months – it is going away, there are so few people on it that it is useless, it is a waste of time, and quite a few more. Granting that the exposure that you could get on the Google+ platform is inconsequential compared to posting on such platforms as Facebook and Twitter, is there still value in maintaining a presence on Google+?

If your only concern is social media, then perhaps not. If, however, your goal is driving traffic to your website or expanding the reach of your message then the answer is a resounding YES!

The screenshot below should be enough to convince you. This was taken from a search today for “Texas A&M University.”

Screenshot of the Google Knowledge Graph for a search of “Texas A&M University”

When we have good Google+ posts they show up in the Google Knowledge Graph (the panel in the right-hand column of the search results page). This is the area of the page that people unfamiliar with the term they searched for look at first, often even before the returned page links. This means that we, in effect, can put whatever message we want to convey on the front page of the search results. You (literally) can’t buy that kind of exposure.

Perhaps, then, we need to shift away from thinking of Google+ as a companion social media platform (where its value is rather limited) and toward thinking of it as a companion to our search efforts and a way of broadening the overall reach of our messaging.

Tuesday, May 26th, 2015 · Search

SEO Consulting

The SEO research that we did this summer and which I posted to this site was never intended to be a “one and done” project.  The intent was always to offer this information, and our team’s time, to other university and system offices in a series of presentations and individual consultations.  I had been busy reworking the blog posts already online into a more focused presentation format, but a conversation yesterday showed me that I needed to announce this service sooner rather than later.

If any of you would like some help, whether it be just a simple conversation or a full-blown review of your site, please get in touch with me.  This certainly applies to any of the colleges, departments, and divisions on campus, but we also want to assist the system universities and agencies.

Thursday, September 11th, 2014 · Search

SEO Report, Phase 2 – Techniques to Avoid

As with everything, there is a dark side to SEO. So-called black hat organizations recognize the importance of search returns as well and will try to manipulate the system to get search returns pointed to their pages. Search engines are getting good at spotting efforts to trick them, though. As a legitimate organization, just don’t do it, either yourself or by hiring a firm that practices these methods. These techniques will often get your site penalized, and you will wind up worse off than if you had not implemented SEO at all.

Keyword stuffing

In its infancy, a search engine was little more than a system for looking up key words. The importance of a page to a topic was measured by the number of times a key word appeared. If an article used the words “car” and “automobile” several times then the algorithm would give the page a high ranking for those terms. This led to the obvious abuse of keyword stuffing: simply using the key words over and over within the content of the page. Some early black hat techniques even added a whole section of nonsense text below the actual page content in order to fool the search engines.

After search engines defeated these crude attempts at manipulation, emphasis (even among white hat SEO firms) turned to crafting content so that your key word fit into the flow of the page but was still repeated multiple times. There were even formulas for how many times you could repeat a term without triggering a penalty for stuffing.

Today these techniques do more harm than good. Yes, you must use key words to get the content into the search index, but don’t overdo it. After the second or third use of a key word on a single page, the search algorithm adds incrementally less importance to it. Instead, focus on writing in natural language for your audience. When we write for humans we still get our point across, but we usually do so with synonyms and varied phrasing.

Deceptive content

There are a range of techniques for using content to deceive the search engines. Again, search firms are well aware of these and actively penalize this type of behavior. The most common deceptive practice is to try to serve different content to search engines than is served to real readers. Practices ranging from simple tricks like hiding content with style sheets (using margins to position text off the page, for example) to more complex ones like sniffing the user agent and serving different content to the search engines are all abused and should be avoided.

Similarly, black hats attempt to manipulate Page Rank by creating link farms and other methods of artificially increasing the number of pages linking to their site (another technique is comment spamming on blogs, as mentioned previously). It was common practice several years ago to create pages that were nothing more than a collection of links to the sites they wanted to promote. They would buy up dozens of domains and include this content on each of them. Many of these domains were typos based on the original site’s domain name, hoping to get people to find their site accidentally (or through search links) and click through to the original. Search engines recognize this type of site and now penalize rather than reward it.

A similar practice is to host the same content on different sites, hoping to double the exposure. There were even legitimate reasons for doing this at one point – a domain name changed but the old one was kept as an alias to keep links from breaking, guest bloggers contributed the same article to many different sites, etc. Now it is seen as manipulation and does incur a penalty. (For situations such as the first example of a domain name change, you should set up 301 Redirects or at least use canonical meta tags to indicate the preferred address.)
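As a rough illustration, the canonical hint is a single link tag placed in the <head> of the duplicate or aliased page (the URL below is just a placeholder):

    <!-- In the <head> of the duplicate or aliased page; the URL is a placeholder -->
    <link rel="canonical" href="http://www.university.com/preferred-page.html">

A true 301 Redirect at the server level remains the stronger signal when the old address has genuinely moved, since it carries visitors as well as spiders to the new page.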

While not inherently deceptive, the meta keyword tag has been so abused since the beginning of search that it is no longer even considered in creating page scores. Its primary use now is to include common misspellings and other things that you want the search engine to see (it is still read, it simply does not contribute value) but which for cosmetic reasons you don’t want your users to see.

Conclusion

Google is quite clear and quite consistent when it comes to its advice on search optimization. The best way of getting good placement for your page on the returns page is to write good content. That means writing content on a topic that people want to read, and writing it for the reader rather than for the search engine. The term optimization itself implies making enhancements to content that already exists, not creating content for the sake of the algorithm. Google holds strongly to this principle, believing that these are exactly the types of pages that are of benefit to users. The more beneficial the page is, the more likely users are to link to it or share it on social media. Those links then become the backbone of the site’s Page Rank.

This doesn’t mean ignoring SEO altogether. After all, optimization is an active process. Understand how search engines work and present your pages in the best possible light, but in the end understand that you are writing for your readers and not for a search robot.

Thursday, August 14th, 2014 · Search

SEO Report, Phase 2 – Outside Influences

Social media, blogs, the Knowledge Graph

Google recognized the importance of social media several years ago and has purposefully made it a larger and larger factor in calculating page rankings. Several factors play into this emphasis. First, because of their nature, social media channels tend to be regularly updated with fresh content. We have already seen that fresh content, even in traditional web pages, is part of the ranking system. Also, social media is very personal to its users. People write about and link to things that are of interest and importance to them. When people click a “share this” icon on a web page they do so because they saw something of value in that content and thought it important enough to share with their social connections. This, then, aligns perfectly with the search engine’s goal of providing information that is relevant and useful to its own users.

Social media channels have evolved quickly over the past several years. When search engines started emphasizing social content most people left their profile pages open to the public, making it easy for the search spiders to find and index the content. In today’s environment of increased emphasis on privacy, most people have their profiles set to allow only friends to view their content. This makes it increasingly difficult for search engines to get to the high-quality content they are looking for.

Of all the social channels, Google+ is quickly becoming the most important in terms of SEO. The fact that this service is owned by the dominant search engine company probably is not a coincidence. Almost everything Google does is related in one way or another to their core focus of search, even if that relationship is not easily visible to the casual observer. The way they have structured the Google+ service, then, unsurprisingly is optimized for getting information into the search engine.

Most blogs and social sites understand how search engines work in terms of using links on their pages to add “juice” to the link’s importance. These platforms have made a conscious effort, in part because of past abuses, to limit this and mark their outgoing links as “nofollow.” This tells search engines not to count the link in the Page Rank of the target page; in other words, the link adds no value to the target’s search ranking. Google+ is different. Links there are DoFollow, which means that search engines will include these links in calculating the target’s ranking. This immediately makes Google+ more important than other platforms in terms of your SEO strategy.

Beyond simple likes (plus-ones in Google+ terminology) the primary benefit of actively managing your Google+ account is localization. Localization is a huge emphasis for search engines right now. They understand that to give you the most relevant returns for your search query the returns should be local to your area. A search for “restaurants,” for example, should show those in your area rather than a nation-wide return. Google+ is directly tied to Google Maps, so it lets you manage your location information yourself rather than being dependent on some outside factor or agent.

There is also a pilot project to link Google+ accounts to authorship of articles on the web. Author information on blog posts or news articles, for example, would have identifiable information about the author. Influential writers could then gain additional exposure. Google has been making a lot of changes to this program, such as no longer including the author’s photo on returns pages, so it is still too early to say how important this mechanism will be.

Google+ is also vitally important to your overall organizational recognition. Whenever someone searches for your organization, the return page will often show a Knowledge Graph (the contextual information that shows up in the right-hand column). For a well-used account, the Knowledge Graph can return links to your Google+ account or even the content of a recent post. See the figure below for a real-life example.

Screenshot of the Google Knowledge Graph for the search "Texas A&M University"

Blogs can be important if they are configured to be search engine friendly. To be successful a blog’s content must be kept fresh, which we know is important to search ranking. As a blog becomes more popular and reaches more people, its effect can grow. Configuration is the key, though. Blogs must be set to allow “juice” to flow to the links’ target pages. The bad guys know this, though, and so target blogs for comment spamming: posting links to their sites in comments in the hope of gaining Page Rank from the blog site. The comments section, then, must either be moderated so these comments are never published or coded so that links in comments are marked as NoFollow.

Given the large, and increasing, importance of video, YouTube is a vital SEO element. While we don’t think of it as such, YouTube is actually the number two search engine in the world. According to recent reports it gets more searches per month than Bing, Yahoo, Ask, and AOL combined. Given that Google owns YouTube, and we have already mentioned how everything they do is tied in to search, this should be another primary focus of an SEO strategy. Optimizing your YouTube account, then, will increase your standings in both the #1 search engine (YouTube videos are included in standard Google returns) and the #2 search engine, YouTube itself. This can be a powerful advantage over your competition. Most firms have Facebook and Twitter accounts and put their primary focus there. Fewer firms have YouTube accounts, and even fewer of those put real effort into making YouTube a central part of their SEO (or even social) strategy.

While not quite “social media,” Wikipedia and similar open databases are still part of the Web 2.0 framework and are also critical parts of SEO. Wikipedia is one of the most popular sites on the Internet and has become the first choice for online information about a topic. Making sure that your Wikipedia page is accurate and up to date, then, is important in terms of protecting your brand reputation. Wikipedia does implement a NoFollow policy on its articles, so links from that site do not add to your site’s Page Rank. Do not, then, try to edit your organization’s entry to add links in the hope of boosting your SEO (abuse of this in the past is probably why the NoFollow policy had to be implemented). The content of the entry, though, is vitally important. See the figure above and notice that the text shown in the Knowledge Graph comes straight from Wikipedia. Since Wikipedia is also on the first page of returns for most searches, you can get double exposure for your message.

Related to Wikipedia, and also used by the Knowledge Graph, is Freebase. Freebase is an open directory (owned by Google, so keep in mind what that implies) that allows the public to curate information about a range of topics. Much of the information that gets posted in the Knowledge Graph comes from Freebase (in the screenshot above this would include the address, mascot, enrollment, acceptance rate, and colors, which are all prominent features of the Knowledge Graph). This gives us the ability to directly affect the Google returns page regardless of the pages that are included in the results.

Friday, August 8th, 2014 · Search

SEO Report Phase 2 – Site Structure

Many elements of a site’s structure affect how search engines interpret and rank the pages. Each of these should be taken into account from the first stages of site design so that you don’t end up with a poorly optimized site that has to be restructured after-the-fact.

Page download speed

Google has publicly announced that download speed is a factor that affects search ranking. They have further announced that slow-performing mobile sites would be penalized. They feel that a poorly performing site leads to a bad user experience and should therefore receive less promotion within the results page. They have not said outright how they measure speed, but the SEO industry has set up several tests in order to make educated guesses about how the algorithm works. They have found that in practice speed is not weighted heavily enough to affect the top-ten rankings. It seems to only be a factor when speeds are extraordinarily slow, or when sites are ranked closely enough that speed can serve as a determining factor. This should not be taken as free rein to make slow sites, though. Google is trusted because it gives its users what they want, and as performance becomes more and more of an issue the download speed can easily become more important within the algorithms.

Robot exclusion

Since the beginning of the web there has been a need to instruct automated website crawlers, often known as spiders, how to view your site. The robots.txt file, which must be located in the site’s root directory, is the accepted standard. It allows the site owner to declare which spiders may access the site and what parts of the site are excluded from crawling.

Search engines are the best example of an internet spider, and most legitimate search engines honor robots files. We can take advantage of this in terms of optimizing our site for searches by excluding pages or directories that we do not want indexed.

Robots.txt files should not be the only method employed, though. If a page that is indexed contains a link to one of the excluded pages, that link can still show up in the search results. If the page header can be templated, or if only a small number of pages need protecting, a robots meta tag is actually preferable.
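A minimal sketch of both approaches (the paths shown are hypothetical):

    # robots.txt, placed in the site's root directory; paths are hypothetical
    User-agent: *
    Disallow: /drafts/
    Disallow: /search-results/

    <!-- Per-page alternative: a robots meta tag in the page's <head> -->
    <meta name="robots" content="noindex">

Unlike a robots.txt exclusion, the meta tag is read when the page itself is crawled, so the page stays out of the index even when other sites link to it.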

Orphan pages

Understand how search engines work in terms of creating their index. They crawl from one page to another by following links. Be careful, then, not to create orphan pages that have no links pointing to them; they will never show up in the search index. Remember that inbound links are an important part of creating a page’s rank.

Images vs. text

Another important characteristic of a spider is that it processes your page as text. It cannot visualize what is in your images or impart any meaning from them whatsoever. Therefore, use text instead of images, Flash, or other technologies to convey important information; search can’t understand information conveyed visually through other media. For images that you do use, be sure to include alt text, which can at least add some sense of what you are trying to convey. Keep in mind, though, that keywords appearing as text do outweigh those found in alt attributes. Using a text browser such as Lynx will give you a good idea of how search engines see your site.
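For example, a hypothetical image might carry its meaning in the alt text as well as in the surrounding copy:

    <!-- Hypothetical example: the key information is in the alt text and also appears in the page copy -->
    <img src="campus-map.png" alt="Map of the main campus showing visitor parking near the student center">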

Sitemaps

Sitemap files should be a must when creating and updating any site that you want indexed in the major search engines. These are not the old HTML sitemaps that are simply links to every page on your site. Instead they are specifically formatted XML files that list your pages, which pages you consider the most important, and the last updated time.

This file can in many ways act as a shortcut for the search engines. It allows you to explicitly state the organizational framework of your site, specify which pages you consider to be most important, and let the search engine know when each page was last updated. This last feature can be particularly important if you make a change deep within your site that you want indexed quickly.
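A minimal XML sitemap sketch, using placeholder URLs and dates, looks something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.university.com/</loc>
        <lastmod>2014-08-01</lastmod>
        <changefreq>weekly</changefreq>
        <priority>1.0</priority>
      </url>
      <url>
        <loc>http://www.university.com/admissions/</loc>
        <lastmod>2014-07-15</lastmod>
        <priority>0.8</priority>
      </url>
    </urlset>

The file is then submitted through each search engine’s webmaster tools or referenced from robots.txt with a Sitemap: line.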

Redirects

Broken links are generally perceived as bad for a site’s search optimization. This can make site redesigns and even general site maintenance a problem when page URLs get removed or renamed. Keep in mind that even though your site might have updated its links to the new structure, there are likely dozens if not hundreds of links to your former pages from sites that you don’t control, all of which will be creating 404s. When a page’s URL can’t be maintained, use redirects to the new page location. Further, use true 301 Redirect statements at the server level rather than meta refresh tags within a stub page. While popular and easy, these refresh statements act only after a preset period of time once a page has been hit, which can disrupt the crawling process.
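On an Apache server (an assumption; other servers have equivalent directives), a true 301 is a one-line statement in the configuration or .htaccess file rather than a stub page:

    # Apache .htaccess sketch; the paths and hostname are placeholders
    Redirect 301 /old-page.html http://www.university.com/new-page.html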

Search engines and javascript

One common question is whether search engines can index content inside of javascript. The short answer is “sometimes.” It really depends on the technique used to embed the content. One popular use is tabbed windows that show different content. An SEO firm put these to the test using different off-the-shelf scripts. Some of the scripts allowed Google to index the content and associate it with the parent page; with others the content was crawled but indexed as a completely separate page. So if you want to be sure, put content in plain text.

Wednesday, August 6th, 2014 · Search

SEO Report, Phase 2 – Page Elements Part 2

Links

Links are an important part of our pages. They are good indicators of what content the page authors perceive as important, so search engines naturally place great weight on them as well. They probably take more aspects of links into account than any other part of a web page. In general, more weight is given to keywords in a link than to those in plain text.

Outgoing links

Outgoing links are, as just described, often the content elements that the writers think are important. They link to other sites to show that the content is important, to allow the viewer to get further information on a topic, or as a way of showing where the authoritative site for a certain topic is.

Choice of where to link, though, is important. If you have lots of links to sites that are perceived as “bad” then your page will actually be penalized. That’s why reciprocal link schemes are a bad idea for us: they gain from our trusted .edu domain, but we lose reputation by linking to them. Several links to important content resources, though, make your site content more valuable and can even result in other sites linking to you.

One good use of outbound links is when we use the external site as an “authority” resource: our site provides valuable content, but we then link to a site that is generally recognized as authoritative on specific content elements within our page. Consider linking to trusted domains such as those found on the Moz Top 500 list.

As with keywords, links located higher in the page outweigh those located lower in the page. Best practice, then, is to write content so that important links can be placed in the first paragraph or two.

You do have to be careful, though, not to create too many links. An SEO concept to be aware of is link bleeding. In general, each page’s link value is divided among all of the links on the page. In theory, then, having a large number of links can dilute the value. According to Google this shouldn’t be seen as an issue for normal content links, but you might want to consider using “nofollow” markers for things like tag clouds or links to sites that have a negative reputation. Tag clouds in particular can be mistaken for link stuffing since they are made up of several unrelated links right next to one another.
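For illustration, the markup difference is a single rel attribute (the URLs here are placeholders):

    <!-- A normal editorial link: passes value to the target page -->
    <a href="http://www.university.com/research/">our research programs</a>

    <!-- A link you don't want to vouch for (tag cloud entry, questionable site): marked nofollow -->
    <a href="http://example.com/" rel="nofollow">example site</a>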

Be sure to keep your page links up to date. Broken links can be a sign of stale content and potentially subtract from your page rank.

Incoming links

Incoming links are a way of determining how popular your site is, and popularity implies trust. The more outside pages you have linking to your site, the more weight the page will be given. As with everything else, not all incoming links are equal. Links from other trusted sites are much more important than others, and referrals from known link farms can actually penalize your score.

Building up these external links to your site is one of the hardest parts of optimizing it. It is not something that generally has a technology-based solution. Old-fashioned techniques like issuing press releases, making comments on (legitimate) blog and forum posts with a link in your signature, and collaborating with peers to link to one another’s content are all effective. Care must be taken, though, not to become associated with link exchange programs or link farm marketing schemes. These can actually penalize your page rank.

That being said, the advice directly from Google is to spend your time concentrating on content rather than link building. Their mindset centers on providing good content. If you are a recognized authority and produce content that people want to consume, people will link to it and the issue will take care of itself. Becoming the “go to” destination is easier than artificially building up links.

Structured data

Structured data is becoming more and more important to SEO. Structured data is code added to the HTML markup that adds semantic meaning to your content. Search engines can pull out this added context and use it to help understand your page as a complete entity rather than a collection of individual words. Search engines can now extract this semantic data and present it as rich snippets within their return text or within the Knowledge Graph. As schema structures become more robust, having some sort of semantic formatting of your pages is going to become increasingly valuable. Right now using structured data will provide an edge over those sites which do not use it; in the future not using it will be a significant competitive disadvantage. There are several types of structured data syntax, but Google recommends microdata, using the vocabularies documented at schema.org.
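A small microdata sketch using the schema.org vocabulary (the organization, place names, and URL below are placeholders):

    <!-- schema.org microdata sketch; names, places, and URL are placeholders -->
    <div itemscope itemtype="http://schema.org/CollegeOrUniversity">
      <span itemprop="name">Example University</span>
      <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
        <span itemprop="addressLocality">Example City</span>,
        <span itemprop="addressRegion">TX</span>
      </div>
      <a itemprop="url" href="http://www.university.com/">www.university.com</a>
    </div>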

Domain

Domain is generally considered to be an important SEO data point, but realistically there is little that we can do about it here on campus. We pretty much all live under the .tamu.edu domain, with some of us further having our own subdomains. This is a bit of a mixed blessing.

The great advantage that we have is that .edu domains are considered to be more trusted than other commodity domains such as .com or .net. This means that by default we will get at least a little boost for our content just because we are an educational unit. (This is also why we get so many emails asking us to exchange links. They know that links from .edu sites provide great SEO value to them while their link back to us does nothing. Please be aware of how outgoing links affect both the target site as well as your own when choosing what to link to.)

The disadvantage we face with our domain is that tamu.edu does not contain our primary university name, and we can’t change that because the ampersand (&) is a reserved character and cannot be part of a domain name. Having a key word in a domain is a great benefit to search returns, so consider that if you ever have a project that requires buying an outside .com domain.

Monday, August 4th, 2014 · Search

SEO Report, Phase 2 – Page Elements Part 1

Page content

Google uses over 200 factors in analyzing web pages, and page content is the single most important one in ranking them on the SERP. Google now places a great deal of importance on semantics and word relationships. That is, it tries to interpret the meaning of your entire page rather than simply looking for individual key terms. Therefore, you should write for your users and not for the search engine. If you build a page with good content that your visitors can easily use and understand, the search engines will also be able to understand what your page is about. Lists and pictures interspersed within the text signal rich, well-developed content and can be rewarded. Conversely, poor grammar and spelling (while not officially among the factors Google considers) do correlate with lower page rankings.

Over half of all searchers are looking for local information. Search engines have naturally started catering to those types of searches and look for clues within your pages that can be used to serve up location-based returns. It is therefore important for all sites, and perhaps even all pages, to include location-based information such as an address and phone number.

While search engines do try to account for the meaning of pages as a whole, they still must account for search queries being keyword based. You should therefore formulate a list of key words that you want to be known for and be sure to include them multiple times (but not too many) within your page. Like an English 101 essay, the main point of any web page should be spelled out at the top, so search engines give particular importance to key words that appear early on the page.

Note what this implies for long-scroll pages. Keeping the content of a page centered on one topic makes it clear to the search algorithms what your page is about. Having multiple sections of content covering completely different topics dilutes your message and gives the largest weight to the content in the earlier sections.

Search engines generally reward freshness, so keeping important pages regularly updated matters. Old content can be perceived as stale content. Similarly, a fresh burst of links pointing to a page can be a sign of freshness and thereby provide a rankings boost. This combination makes blog-based news sites particularly powerful in elevating your message.

Title tag

After page content, the <title> tag is probably the most important factor under our control. Title tags should include your primary key word, and research has shown that titles starting with the key word slightly outperform those that simply contain the word. The W3C specification for the title tag recommends that it be less than 64 characters in length (Google will display 66 characters and then truncate, even if that falls in the middle of a word).
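For example, a keyword-first title for a hypothetical admissions page might look like this:

    <!-- Keyword first and under roughly 64 characters; the wording is only an illustration -->
    <title>Undergraduate Admissions - Texas A&amp;M University</title>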

The title tag is usually what gets displayed on the SERP, so make sure it is readable and makes sense on its own. Even if your page does not achieve top placement, a good title that can catch the eye is often just as effective even if it is further down the page.

Also make sure that all pages have a unique title that correctly describes the content of the page.

Meta description tag

Meta description tags are not heavily weighted in determining page rank, but they are nonetheless important in enticing users to click on your link instead of others on the SERP. This is the text that is shown underneath the page’s link on the SERP. The content here should be short, one or two sentences containing somewhere around 150 characters. They should be a concise description of the page’s content, ideally containing your chosen key words. Note how the example below is overly wordy and gets cut off without really conveying the heart of what we want to say.

Screenshot of an entry on a Google return page

Writing meta tags is often perceived as drudgery, but crafting the copy that goes into them is as important as writing the main page copy. High placement on the SERP isn’t going to help much if the title and description don’t grab the viewer.
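For illustration, a deliberately crafted description stays within the limit and still carries the key words (the copy below is purely illustrative):

    <!-- Roughly 150 characters, written to read well on the results page; copy is illustrative -->
    <meta name="description" content="How to apply to Texas A&amp;M University: freshman and transfer admission requirements, application deadlines, and financial aid.">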

Meta keyword tag

This was probably the most abused HTML tag in the early days of SEO. It no longer carries any weight in determining page rank. Keywords contained in this tag are fine, but past over-stuffing has led all major search engines to discount what we put there. That does not, however, mean the spiders don’t read the content; they just don’t give it any added weight. One good use of the tag, then, is to include elements (like common misspellings) that would be beneficial for the search engine to see and index but which you don’t want visible in your main content.
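A hypothetical example of that use:

    <!-- Hypothetical example: common misspellings kept out of the visible copy -->
    <meta name="keywords" content="admissions, admisions, scholarships, scholorships">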

Heading tags

Heading tags – <h1>, <h2>, <h3> etc. – are a vital part of SEO. They give structure to your page and help organize content. Think of them as a page outline. Headers, being the equivalent of section titles, summarize the meaning of important content blocks on the page. Search spiders therefore look for key words appearing in heading tags, giving the most weight to those in <h1>, then <h2>, and so forth down the line.
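Treated as an outline, a page’s headings might be structured like this (the topics are placeholders):

    <!-- Headings used as a page outline; the topics are placeholders -->
    <h1>Financial Aid</h1>
      <h2>Scholarships</h2>
        <h3>Freshman scholarships</h3>
        <h3>Transfer scholarships</h3>
      <h2>Grants and Loans</h2>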

Page URLs

Your page’s URL is its address on the internet. Having your keywords appear in a page’s URL is generally considered to be important in getting that page ranked in the search engines. While most of us now click on links to navigate to a page, and do not manually type URLs into the browser’s address box, the format of the URL is also important in determining search rank. Search engines use the content of the URL to help determine site structure as well as page content. Search engines here are no different from human users: creating a URL that is readable and concise makes it easier to quickly understand what to expect on the page.

First, avoid passing parameters in the URL if possible. SEO researchers have shown that links with more than two parameters are often not spidered unless they are otherwise perceived as important links. Parameters allow the developer to pass information from one page to another. For example, the link http://www.university.com/page.html?parameter1=value1&parameter2=value2 passes two values to the target page. Instead of using this structure, consider converting the application to a REST structure where the parameters are rewritten like directory names (http://www.university.com/parameter1/value1/parameter2/value2). Most modern programming frameworks make this relatively easy.
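If the application itself can’t be restructured, URL rewriting can present the friendly form externally. A sketch for an Apache server (an assumption; the path and parameter names below are hypothetical):

    # Apache mod_rewrite sketch in .htaccess; "degrees", "college", and "major" are hypothetical names
    RewriteEngine On
    RewriteRule ^degrees/([^/]+)/([^/]+)$ page.html?college=$1&major=$2 [L,QSA]
    # Maps /degrees/engineering/computer-science to page.html?college=engineering&major=computer-science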

The choice of characters used in the URL is also important. Blogging software does a good job of making descriptive URL names by simply converting the article title into a destination address. We can manually do the same so that the file name is a multi-word entity. Separating the file name into multiple words is preferable to running the words together with no space (fileName.html) because the search engines can recognize the individual words and act on them appropriately. Separate the words with a hyphen (-) rather than an underscore (_) or encoded space (%20). Google explicitly recommends the hyphen as the preferred word separator to make sure that it indexes each word independently.

Thursday, July 31st, 2014 · Search

SEO Report, Phase 2 – Introduction

I have finished the second round of my SEO research, gathering a set of recommendations and best practices that can apply to any organization.  As with the previous report, I will post sections of it here spread across the next several weeks.  I hope to turn this into a presentation format and eventually be able to do consultations with offices across campus, and perhaps present it at a uweb meeting.

Introduction

For all the hype about search engine optimization, at its heart it is about marketing. While the techniques used might be technical in nature, the end goal is to better position our pages on the search engine return page (SERP) so that more people will see it and click through to our site. A 2011 study indicated that the #1 position in Google’s search results received 18.2% of all click-through traffic, the second received 10.1%, the third 7.2%, the fourth 4.7%, and all others below 2% each. Clearly, then, it is in our best interest to be ranked as highly as possible.

In order to make recommendations regarding changes we should make in our sites’ search optimization, we must first lay down the guidelines that should be followed. In many ways SEO is very formulaic. Know the rules, implement them on your page, and monitor the search industry for any changes made to the algorithms that determine rankings.

Google is very open about how you should build pages to achieve the best results, but they do not go into explicit detail on the weights given to each element of a page when calculating page rankings. SEO firms, through a combination of listening closely to what Google and Bing developers say publicly, patent research, and code testing, have helped fill in some of these areas to allow for more specific recommendations.

As you read through the following posts, please keep in mind that these recommendations are meant only as a brief overview. They touch on the important parts of each topic, but I am well aware that much more could have been included. If you feel strongly about something that wasn’t mentioned, please feel free to leave comments and start a conversation.

Wednesday, July 30th, 2014 · Search

SEO and HTML5

We switched all of our sites over to HTML5 over the last two years. During the transition I saw many advantages of HTML5. Its new tags gave greater structure to the page and cut the related style sheets by more than half. I suspect now that it was also having an impact on our search results.

Google says outright that they like structured data. While they are primarily referring to microformats and other specifically structured code, I can’t help but think the added structure within HTML5 pages has to be picked up. Tags such as <nav>, <article>, and <footer> all have specific structural meanings which are relevant to what Google itself says is important.

Navigation is considered a central element, and now we can explicitly state the links that we consider navigation. While we often used <div class="nav"> to mark navigation, and I’m sure Google was able to pick up on that, having an actual tag doing the job ensures that our intent is properly interpreted.

Similarly, the <article> tag allows us to tell search engines exactly what content we consider the main focus of the page. Along with the accompanying <aside> tag, which lets us mark content sections as supplemental information, we now have much greater control over the structural meaning of our pages. This would be especially important on something like a WordPress-based news site where there are multiple articles on the same page.

Link bleed is the concept of having “too many” links on your page, such that the overall link value that you get is diluted or even reduces your page rank. In previous versions of HTML this could further be exacerbated by the large number of common links in the pages’ header and footer section. The new <header> and <footer> tags, though, allow search engines to clearly see the purpose of these sections and discount the links contained within them.
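Put together, a bare-bones HTML5 page using these structural tags looks something like this (the placeholder text just marks where each kind of content goes):

    <!-- Minimal HTML5 structural skeleton; placeholder text marks each region -->
    <header>Site banner and logo</header>
    <nav>Primary navigation links</nav>
    <article>
      Main page content
      <aside>Supplemental or related information</aside>
    </article>
    <footer>Contact information and common footer links</footer>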

In theory this all sounds good, but I won’t pretend to be an SEO professional. I have not set up any A-B testing to empirically see whether or how much influence these things actually make on organic return listings.

Wednesday, July 2nd, 2014 · Search

SEO Report – Status of Marcomm Sites, Part 5

Paid Keywords

There is no SEO evidence that buying paid ads improves your organic ranking in search returns. All search engine companies maintain very firm walls between their commercial divisions and their search technology divisions. Further, most search engines can recognize ads and discount them when calculating the importance of incoming links. However, anecdotally we do still see increases in rankings for enterprises participating in paid search programs.

We currently do not use paid ads on any keywords. This puts us at a competitive disadvantage to several of our peers. About half of our Vision 2020 peers use paid search ads to some degree; most of those buys are relatively small, but some schools make very significant investments.

Most other schools are focusing their ads on online courses and their business schools. The emphasis on courses is probably the reason that ad buys often vary largely by month: schools are strategic about placing ads when students are actively looking for courses. Bypassing the emphasis on courses and expanding into our grand challenges could be a way of setting us apart from other higher eds.

Conclusion

This report is meant simply to show the current state of search results for the campus web properties that we own. While it does include rudimentary analysis, it is not intended to offer strategic recommendations on how to implement an SEO program; that will come later in a separate report. This report instead lays the groundwork for those recommendations, documenting where we are now so that we can create goals for where we want to be and how to get there.

Monday, June 30th, 2014 · Search
