So over the last several months, I have been dedicating a fair amount of my time to updating our configuration management system. Several years ago I started using Puppet for this purpose, but due to the limitations of both my knowledge and the SUSE Enterprise version we had been using, our methods and implementation were in need of a good update. After spending longer than I care to admit evaluating what was out there, I finally decided Ansible was where I wanted to start.
So far… Ansible is awesome.
I have only begun to scratch the surface, but I can definitely say I’ve been able to get much further and much deeper, MUCH faster than I ever did with Puppet. Now, to be fair, my experience with Puppet certainly helped give me a good jump start, but I feel like it’s been much easier to get in and do things quickly with Ansible. The iteration process is certainly many times faster.
There are a few reasons for that.
It’s agentless. This is so awesome, and honestly it was probably the single biggest reason for deciding to try Ansible. In the grand scheme of things, I admit, it’s not a huge deal. However, the fact that you don’t have to authorize and manage an agent on each server is just one more layer you don’t have to worry about or troubleshoot. All you need is a relatively up-to-date version of Python (2.6 or later) and SSH. Simple. Being agentless also implies another awesome feature…
It’s serverless. There is no server to run on a centralized machine. You can run all your scripts from your own workstation… or ANY workstation for that matter. That’s two fewer things to worry about.
Developing with Vagrant. Now this isn’t part of Ansible itself of course, but we have been slowly working Vagrant into our workflow, and it is a huge help. I can run a complete copy of whatever server I’m currently working on and test, re-test, and test some more, very quickly. If I totally screw something up, all I have to do is delete the virtual machine and re-deploy it. All on my local machine. This speeds things up dramatically without the worry or hassle of connecting to a remote machine.
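To give a flavor of what this workflow looks like, here is a minimal example playbook. The hostnames, group name, and package names are placeholders, not our actual configuration – just a sketch of the Ansible style:

```yaml
# site.yml -- minimal example playbook (group and package names are placeholders)
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure ntp is installed
      yum:
        name: ntp
        state: present

    - name: Ensure ntpd is running and enabled at boot
      service:
        name: ntpd
        state: started
        enabled: yes
```

You run it with `ansible-playbook -i inventory site.yml` from any workstation. No agent to install or authorize on the targets – Ansible just connects over SSH and uses the Python already on the box.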
One last thing I’d like to mention is that Ansible is now owned by Red Hat. This may not be a big deal for some people, but I feel it’s nice to have backing from a longstanding, trusted company, especially when it comes to using new technology on production machines. So far it seems that Ansible has been left to do what they do best. We will see, but for now I see this as a nice bit of insurance that it will be around for a while. It also coincides with our decision to move everything to CentOS 7.
That is all I’m going to go through at the moment. There are a million “Ansible vs.” posts and articles out there for whatever configuration management flavor you’d like, if anyone is curious. I’m looking forward to seeing how far and deep we can take our Ansible implementation, and hopefully I can share some more knowledge about it in the near future.
Throughout the course of the year I had been watching the university site’s traffic on gameday. I have mentioned before how gameday affects traffic – not only the amount but also the pages people are looking for. I decided to use this trend to try to encourage visits to some of our other web properties, so on bowl game day we swapped out the “In the Spotlight” section on the front page of the university site with links to Aggie Traditions, 12th Man, and Reveille.
My theory was that these sites all receive more hits during game weekends without any additional advertisement, so adding them on the front page should drive even more traffic. The numbers, though, are inconclusive at best. Each of these sites did – as expected – experience an increase in traffic. The increase was not as large as I would have thought, though. More telling, less of the increase came from referrals than I had hoped.
The university site did experience an increase in traffic, but not as much as it did for some of the bigger games during the season, which meant fewer people saw the links to click on in the first place. Organic search still dominated as the channel by which people were landing on these sites.
We also didn’t attract as many outside visitors to the university site as with other gameday peaks. On average we had 42% new sessions during the semester. The October 8 game against Tennessee, fueled by our also hosting ESPN’s GameDay, saw a peak of 78% new sessions. For the bowl game it was only 62%. This percentage drop, coupled with the lower overall number of visitors, would imply that there were fewer visitors who might have been looking for this type of information.
The matchup itself may explain these trends. Having shared the Big 12 with us for several years, most Kansas State fans are probably already familiar with Texas A&M. The matchup probably didn’t have the national draw that some of our in-season games did, so we would have fewer people from areas of the country who aren’t as familiar with who we are.
So while I don’t think this was a failure, it still wasn’t the success I was hoping for. It does indicate that we should schedule these content changes around events that draw more outside attention than we were able to attract this year.
Solr is an open source enterprise search platform, written in Java, from the Apache Lucene project. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document (e.g., Word, PDF) handling. Providing distributed search and index replication, Solr is designed for scalability and fault tolerance.
Databases and Solr have complementary strengths and weaknesses. SQL supports very simple wildcard-based text search with some simple normalization like matching upper case to lower case. The problem is that these searches require full table scans. In Solr, all searchable words are stored in an “inverted index,” which makes searches orders of magnitude faster.
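To make that difference concrete, here is a toy sketch of the inverted-index idea – not Solr’s actual implementation, just the core concept: instead of scanning every row for a word, you build a map from each term to the documents containing it, so a query becomes a direct lookup.

```python
# Toy inverted index -- illustrates the concept, not Solr's real data structures.
from collections import defaultdict

docs = {
    1: "solr is an open source search platform",
    2: "databases use full table scans for wildcard search",
    3: "an inverted index makes search fast",
}

# Build the index once: each term maps to the set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(*terms):
    """Look each term up directly; intersect the sets for multi-term queries."""
    results = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*results) if results else set()

print(sorted(search("search")))          # every document containing "search"
print(sorted(search("full", "search")))  # documents containing both terms
```

The point is that query time depends on the size of the result sets, not the size of the whole collection – which is where the orders-of-magnitude speedup over a table scan comes from.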
Solr exposes industry standard HTTP REST-like APIs with both XML and JSON support, and will integrate with any system or programming language supporting these standards. For ease of use there are also client libraries available for Java, C#, PHP, Python, Ruby and most other popular programming languages.
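As a small illustration of what that HTTP API looks like, here is a sketch that builds a Solr select-query URL. The host, port, and core name (“articles”) are placeholder assumptions – adjust them for a real installation:

```python
# Build a Solr select query URL. The host, port (Solr's default 8983), and
# core name "articles" are assumptions for illustration.
from urllib.parse import urlencode

def solr_query_url(core, q, rows=10, wt="json"):
    """Return the URL for a basic Solr select query against a local instance."""
    params = urlencode({"q": q, "rows": rows, "wt": wt})
    return f"http://localhost:8983/solr/{core}/select?{params}"

url = solr_query_url("articles", "title:ansible")
print(url)
```

The same URL works from curl, a browser, or any HTTP client in any language; switching `wt` from `json` to `xml` changes the response format.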
While LiveWhale does have native handling of both events and locations, it does not provide an out-of-the-box way to display “events in this location.” With a little outside programming and use of either the REST interface or a couple of widgets we can create exactly that. I don’t think it will be necessary to create such a page for all of the 1,000+ locations on campus, but I am sure there are several – especially those buildings which are used by many organizations – for which this could be quite beneficial. This might even be something which gets fed through FourWinds onto display screens in those locations. We are looking into an implementation, and hopefully we will have something to share soon.
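The outside programming involved could be quite small. Here is a hedged sketch of the filtering step, assuming an events feed in JSON; the field names (`title`, `location`, `date`) and the sample data are hypothetical, since the real keys depend on how LiveWhale exposes its REST data:

```python
# Sketch of the "events in this location" filter over a JSON events feed.
# Field names ("title", "location", "date") are hypothetical placeholders;
# the real keys depend on LiveWhale's REST output.
import json

def events_in_location(feed_json, location):
    """Return the events whose location matches, case-insensitively."""
    events = json.loads(feed_json)
    return [e for e in events
            if e.get("location", "").lower() == location.lower()]

# Sample feed standing in for a real REST response.
feed = json.dumps([
    {"title": "Career Fair",   "location": "Rudder Tower",  "date": "2017-02-01"},
    {"title": "Guest Lecture", "location": "Evans Library", "date": "2017-02-02"},
    {"title": "Open House",    "location": "Rudder Tower",  "date": "2017-02-03"},
])

matches = events_in_location(feed, "rudder tower")
```

A page (or a FourWinds screen) for a given building would just fetch the feed, run a filter like this, and render the matches.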
I think it is no secret that athletics drives eyeballs, including to university websites. Without getting into the merits, a look back over analytics for the past five years shows that all of our largest traffic spikes come on days of a big game. This week was no exception.
Our site traffic on Saturday (the vast majority of which came during the game) was over twice the traffic we normally see for a daily high during the week. Not so surprising perhaps, but we all know that page hits are vanity metrics – what more interesting things can we see?
Looking at the most popular pages (other than the front page) we find a completely different set of pages being viewed. Frequently Asked Questions and About Texas A&M each received twice the traffic of any other page. After those two, At a Glance, Athletics, Traditions, Admissions, and History of the University round out the rest of the most popular pages. Again, not terribly surprising. The widespread television exposure probably meant that there were lots of people coming to find out more about us. But it represents a definite change from a more normal day in the type of content being read.
The geographic location of visitors bears this out. While Texas is normally by far the most common location, it barely beat out Tennessee. While the southeast was solidly represented, other areas such as the west coast, midwest, and upper east coast were also well represented.
The one metric that really stands out is the device that visitors were using. Only 20% of visits came from a desktop. We normally see more like 65% coming from the desktop, so this represents a major shift. Perhaps visitors don’t want to leave the TV to go into the other room, and instead pull the site up on their phone or tablet?
What can we take away from this? One thing may be that events – whether football or something completely different – have dramatic effects on who comes to our sites and what they are looking for. How many of us actually change our websites to cater to this different demand? We go to great pains to optimize our sites and hit our normal target audience’s needs, but then never touch the content again. If our goal is to present visitors with the information they want, perhaps we need to recognize this trend. Almost 80% of our traffic was from new users. How much more effectively could we have reached this new audience if we had optimized the content for them that day?
So we have been using WPEngine for several months now and have been through a major site launch. Sometimes a move like this can take a while to fully evaluate, so I thought I would give another update after having used the service for a while.
Honestly, we’ve been very happy with the experience.
WPEngine’s staging feature has been an invaluable tool in getting sites prepped and tested before being easily pushed into production. Another massively helpful use of this is in testing plugin and theme updates before applying them. Simply copy your site to the staging area, apply the updates, and then verify the results. This gives you a nearly sure-fire way of knowing if an update will break your site or not.
Also, we have found their tech support to be great. It is easy to get a hold of someone knowledgeable, and their employees are empowered to do a lot without having to go through an escalation process. The few times we have had an issue, they were able to quickly find and fix it with one phone call. When we launched Lead By Example, we called to let them know we had an important site coming online. They went out of their way to check everything from their end to make sure it all went smoothly.
They also have a pretty good status page at wpenginestatus.com. You can subscribe to the service and get email alerts for any issues that might affect you. Thankfully it is usually pretty quiet, but it is nice to have the additional notifications.
That’s about it for now. As always, let us know if you have any questions. We’d be happy to talk further with anyone about WPEngine or anything else on your mind.
Modo Labs, the vendor for our university mobile app, has created a new blog series talking about how their client schools are using the product. Texas A&M was selected to be the first university profile in this series. They interviewed our local Marcomm team and asked about how we are using the app, what the most popular links are, how we partner with campus, and much more.
I have been learning more and more about how the map entries work as I get deeper into this project. One thing that winds up being much more important than I had first thought was the Primary Category.
When you edit a location you are given the choice of a Primary Category and as many additional categories as you wish. Whatever you add as the Primary Category will be shown on the public location entry. I had always thought this was largely cosmetic – I had never clicked on that link myself any time I had looked up a location. But if you do click the link, it acts as a filter: the search column will show you all of the other nearby entries with that categorization.
This, then, makes choice of category important. We do want it to be relevant to what our location is, but at the same time we want some consistency that will allow our locations to be cross listed in this sort of search. So the College of Medicine, for example, might be better served by using “College” as its primary category and “Medical School” as a secondary category rather than the other way around.
Categories are not completely open ended – Google has a select list of allowed categories (which unfortunately are not – to my knowledge anyway – published anywhere). This means that we will have to create our own consistency with a common set of category entries.
One of the most important parts of optimizing your Google Place entry is adding and curating photos. The selection of good photos makes for better engagement with people searching for and viewing your entry, and Google seems to like and favor those entries that include photos.
There are two types of photo entry that you will need to manage – those which you add yourself and those which have been submitted by the public.
Adding your own photos is relatively simple – just navigate into your location’s dashboard page and click the “photos” link. From there add your profile and logo image at the top, and then as many other photos as you wish. The page breaks them down into interior, exterior, team, services, and additional photos. One positive thing that I have noticed already is that when you add photos to one location they can automatically be pulled in and displayed on related locations as well.
Equally important is curating the images that are submitted by the public or pulled in through Google’s web crawls. I have found many problems with photos on the university entry – from poor quality, to advertising from nearby businesses, to images of a completely different location. There is no magic bullet to update these. I have simply had to get into the location entry and (repeatedly!) use the Report a Problem link to recommend that the photo be removed. This generally takes several attempts, but the system does eventually respond and remove the photos.
Good news on the Google Maps front. Yesterday afternoon our central account was finally accepted as “verified for bulk uploads.” This status means that we can claim ownership of the many location entries that have been created across campus and bypass the normal process of postcards and phone calls. This allows us to get on with the project in earnest.
While this allows us to more easily move forward, we do want to do so deliberately and with a plan. This project is not something that Marcomm can do on our own. I expect that this will be an enormously collaborative project where we work with members of your teams to identify and update content, fix inaccuracies, and promote the locations.
While we are still in the planning phases, one thing that I encourage you to do is create a list of the locations which you know are associated with your college, department, or division. Start looking at information that needs to be updated. As we meet with each of you I can add members from your team as co-owners or content managers so that you can make these updates.
I will hopefully be in touch soon.