Google Caffeine and the SEO benefits of HTML 5

September 2nd, 2009 Jason

Things I read today:

6 Things to Expect if Google Decaf Gets a “Caffeine” Boost
A pretty thorough comparative dig into Google vs. Google. When Caffeine rolls out of beta and then migrates to the UK, what can we expect to see?

Also did a whole bunch of reading up on HTML 5.
http://www.w3.org/TR/html5/
http://dev.w3.org/html5/html-author/
http://www.seomoz.org/ugc/7-html-5-elements-that-will-make-seo-more-enjoyable
http://www.searchengineoptimizationjournal.com/2009/06/22/html-5-seo/
http://www.ibm.com/developerworks/library/x-html5/
http://www.sitepoint.com/blogs/2009/07/24/google-html5-and-standards/
http://net.tutsplus.com/tutorials/html-css-techniques/html-5-and-css-3-the-techniques-youll-soon-be-using/
http://www.alistapart.com/articles/previewofhtml5

…the last one being a particularly useful writeup, explaining things in pretty good detail while staying nicely readable.

All this reading was prompted by a somewhat ambiguous requirement included in our ongoing CMS migration – restructure code so that SEO-valuable content is at the top.

On the surface it makes sense – get the central content box as high up in the code as possible, move all the excess crap to the bottom, and position it all onscreen with CSS. A strict interpretation of this, though, is a nightmare when using shared resources across multiple sites, and causes all kinds of headaches for accessibility. A full code revamp is happening anyway to better linearize the code for accessibility, and the question came up whether coding to HTML5 standards would be sufficient. If a search engine discounts boilerplate code, can we assume that the HTML <nav> or <footer> tags will be recognized and equally discounted, regardless of where they are in the code?
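
To make that concrete, here is a minimal sketch of the structure in question – a hypothetical page with the content first in source order and the shared furniture marked up semantically:

    <!-- Hypothetical page: main content first, shared boilerplate labelled semantically -->
    <body>
      <article>
        <h1>The SEO-valuable content, as high in the source as possible</h1>
        <p>Unique copy that we want crawled and indexed first…</p>
      </article>
      <nav>
        <!-- Shared navigation: would a spider discount this wherever it sits? -->
        <ul>
          <li><a href="/">Home</a></li>
          <li><a href="/about/">About</a></li>
        </ul>
      </nav>
      <footer>
        <!-- Shared legal links and other boilerplate -->
        <p>&copy; Example Corp</p>
      </footer>
    </body>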

Clearly nobody yet knows. What is the actual SEO benefit of HTML 5? There is certainly logic in thinking that a cleanly organized page which essentially follows a DOM tree will be more easily crawled, and the content within more easily parsed and indexed by a spider. But that’s complete speculation at this point…and as the HTML 5 spec is still very much in draft mode (and likely to remain in development for years), we seem likely to be taking a stab without really knowing at all.

Posted in Google, links, seo

How to create your own link farm on Twitter

August 18th, 2008 Jason

Twitterfeed.com will push an RSS feed into Twitter and automagically post entries on whatever schedule you want.
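
For the unfamiliar, what Twitterfeed consumes is just a standard RSS item – a minimal sketch, with a hypothetical feed:

    <!-- One item from a hypothetical RSS feed; Twitterfeed turns the title and link into a tweet -->
    <item>
      <title>New jobs in London posted today</title>
      <link>http://example.com/jobs/london/latest</link>
      <pubDate>Mon, 18 Aug 2008 09:00:00 GMT</pubDate>
    </item>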

So, clearly, the thing to do is to set up an account for your website, and then set up accounts for all of your other websites, and just have all the accounts watch all the other accounts, creating a vast network of interlinked Twitter profiles.

Um. Is this just corporate use of Twitter gone horribly wrong? Ignoring the fact that all these websites already make up one big link farm¹ – and given that almost everything on Twitter is nofollowed – I’m trying to think through the logic of this approach. Is it about visibility, is it about traffic, or is it just about getting the content indexed?
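
For reference, “nofollowed” just means the links come out the other end with a rel attribute attached – something like this (hypothetical URL):

    <!-- How a tweeted link ends up marked: search engines are told not to pass link value -->
    <a href="http://example.com/jobs-in-london" rel="nofollow">example.com/jobs-in-london</a>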

¹ Since these guys are consistently showing up as #1 in Google for ‘jobs in [location],’ what I see as very dodgy interlinking would seem to actually be doing them some good.

Posted in Google, links, seo

Search Marketing Bravado

July 14th, 2008 Jason

I was walking through Charing Cross station the other day when I saw a large billboard for a major mobile phone company. What caught my eye was the call to action: Search ‘I am.’

Gotta say, that’s some confidence there.

One of our brands has recently been pursuing a campaign with a similar angle – instead of a web address, we’ll prompt you to simply drop this key term into a search engine, and dimes to dollars (pence to pounds?) you’ll find our site at the top of the results.

This particular campaign is being managed by an external agency, and apart from dropping a link on the main brand site, the current Google rankings are being driven exclusively by their linkbuilding efforts – and the fact that they’ve accomplished the promised #1 and #2 spots for the two landing pages may speak well for them…though you know it helps that the term in question is absolutely unique on the web. It’s kind of like asking, "why does Flickr rank number 1 when I search on the word flickr?" Maybe because it’s a totally made-up word? That’s essentially what this campaign has done – coined its own unique term. The real challenge is raising awareness of the term in the first place, so people know to search for it.

Certain search marketers have business cards which simply prompt you to search on their name. "Just google me – you’ll find me." That’s a little easier to control, perhaps – unless you’re named John Smith; then, I imagine, it’s a bit more of a game. But I could do that, and I haven’t really been trying too hard – I’m just more active online than that guy in Tucson, now that he’s stopped racing bikes and getting listed in the sports section every two weeks. But as a potential client, you want to be able to look up the person you’re paying and know that they can do their job – and if most of the top ten results point to the same person, you get a pretty good feeling that they know how to play the game.

I have to say, though…I was impressed by the cojones behind a campaign that relied on being able to perform for something as generic as ‘I am.’ Even my jaded self was prompted to plop down at the laptop and do the search – and this is where it backfires.

See, I use the CustomizeGoogle extension for Firefox, and I block paid ads. What do I see when I search on "I am"? Humorously, i-am-bored.com. Wikipedia. A London-based branding consultancy that you can bet wasn’t part of this campaign. I Am Legend. I Am Kloot. And a bunch of other stuff that has nothing to do with Orange mobile communications. Whoops.

Of course, when I enable ads or search on Yahoo!, there they are, right at the top…but how disappointing. That’s just a matter of being willing and able to spend more money than anyone else – once again, for a term that probably not a lot of people are either searching for or bidding on. And I’m let down, just a bit, because somebody had some rocks to sell this advertising idea, but the execution, in my book, is far less impressive than it could have been.

Posted in Ask, Google, MSN, Yahoo, plugins, seo

The great SEO race

May 7th, 2008 Jason

I’m working out a little internal competition for my team.

We’ve got a handful of small niche sites that recently went live. Each site is a thin vertical slice of information aggregated from several broader datasets, essentially mashed up with other new and relevant content. The plan is for us to each take charge of one site, and within the limits of our acceptable SEO practices, go head-to-head to build links, secure SERP placement, and generate traffic.

Each niche is roughly equivalent in terms of specificity and search volume, and the only budget is time – there are no paid campaigns, so it seems a pretty even playing field. I’m thinking that we benchmark on several factors:

  • overall page impressions + percentage growth
  • page impressions from organic search + percentage growth
  • Google SERP position on a pre-defined set of 5 to 10 key terms
  • inbound links as counted by Yahoo!
  • downstream traffic to the parent sites (this is the conversion metric)

Benchmarks will be taken at 2 month intervals for 6 months, with a lunch on the line for the mutually agreed leader at each checkpoint.

Dev resources and access to certain tools (Hitwise) are limited, and that’s a detail we need to work out before this can start, but there will be some ability to make changes to the sites themselves and to do competitive research.

It will be an interesting challenge, to say the least.

Posted in Ask, Google, MSN, Yahoo, seo

How to get the same page indexed twice

April 7th, 2008 Jason

OK, I can’t actually tell you how this has happened, I’ve just noticed that it has.

My wife does a London theatre blog. I was looking up her best-performing search terms, and then looking around to see what the competition was.

Looking through the results for her current all-time top search term, I found a listing for an entry in another blog.

Impempe Yomlingo on Google at #21

And then I found it again.

Impempe Yomlingo on Google at #31

Same page. Same URL, same title/meta information, etc. etc. etc. The only difference I can find is that the Last-Modified header is 2 seconds different between them. Now, I’ve seen Google index things really quickly, but that’s a little hard to swallow.

So. Why does Google list it twice, as if it is two unique pages? Inquiring minds want to know.

(It’s a great writeup of a great show, by the way.)

Posted in Google, theatre

SES NY Day 4: Meet the crawlers

March 20th, 2008 Jason

Session brief:

Representatives from major crawler-based search engines cover how to submit and feed them content, with plenty of Q&A time to cover issues related to ranking well and being indexed.

Yahoo!, Google, and MSN all put themselves on the podium and each ran through an overview of their webmaster tools. Anyone who is already using these tools and/or keeping up on the industry blogs probably didn’t get much new out of it, though I did catch just a couple tidbits:

  • Yahoo! will now accept a robots.txt Sitemap: directive that points to a sitemap.xml on a different domain (a sketch follows this list). I think this is pretty new, and useful for anyone who has issues getting things hosted on their corporate servers.
  • The protocol for supplying a Google News feed is different from the standard sitemap.xml protocol (see the second sketch below). I hadn’t looked into this much in the past, so that’s probably not new, but it’s good information to have heard.
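
To illustrate the first point, a minimal robots.txt sketch with hypothetical domains – the sitemap lives on a different host than the robots.txt that declares it:

    # robots.txt served from http://www.example.com/robots.txt (hypothetical domains)
    # The referenced sitemap lives on a completely different host:
    Sitemap: http://sitemaps.example-hosting.net/example-com/sitemap.xml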
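
And for the second point, a sketch of a Google News sitemap entry as I understand the news extension – the news: namespace and the per-article tags are the bits the standard protocol doesn’t have (the URL and publication name are made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical news sitemap entry; the news: namespace is the Google-specific part -->
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
      <url>
        <loc>http://www.example.com/news/some-article.html</loc>
        <news:news>
          <news:publication>
            <news:name>The Example Times</news:name>
            <news:language>en</news:language>
          </news:publication>
          <news:publication_date>2008-03-20</news:publication_date>
          <news:title>Some article headline</news:title>
        </news:news>
      </url>
    </urlset>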

For once, the Q&A was the heart of the session, and after a bunch of standard “I have this very specific issue with my site” kind of questions, I couldn’t resist the opportunity to try raising a ruckus with a meaty one. I wanted to know how the engines were currently viewing the use of the rel=”nofollow” attribute on internal links to a website’s own pages. Though Matt Cutts has gone on record about it several times, I think there’s still confusion, so I asked. The responses:

  • Sean Suchter of Yahoo! said they are not using a nofollowed link for calculating the “link quality distribution” score for a page. He did not specifically say, “don’t use it on internal pages,” but he did specifically say, “I would be wary of using it for sculpting pagerank.”
  • Evan Roseman of Google did a marvelous bit of dancing and basically said “go find Matt’s post,” but he also said that there were some situations where an internal nofollow might be appropriate. He did not comment on its use as a sculpting/siloing device. (The post in question is, I believe, on Matt Cutts’ blog here, but there are other more recent – and not always clearly consistent – quotes on SEOMoz, SEORoundtable, and Dave Naylor’s blog as well. In fact, there’s a lot of opinion out there if you do as Evan suggested and Google: Matt Cutts nofollow.)
  • Nathan Buggia from MSN dodged entirely and simply said he thought there were better uses of your time and money than worrying about it. Which, interestingly, is pretty much what Matt said in Dave Naylor’s post, and with which, honestly, I have to completely agree…though he didn’t, technically, answer my question.
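
For context, the pattern I was asking about is trivial to mark up – an internal link with the attribute applied, on a hypothetical page:

    <!-- An internal link nofollowed in an attempt to "sculpt" PageRank away from it -->
    <a href="/terms-and-conditions/" rel="nofollow">Terms and conditions</a>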

I was a bit amused that Evan referred to the question several times as a “pretty advanced topic.” Sorry, man, the SEO 101 session was three days ago, and I didn’t cross the ocean to go to it.

Posted in Google, MSN, SES, Yahoo, conferences, seo

SES NY Day 2: Orion panel on Universal Search

March 18th, 2008 Jason

Official pitch:

Search result multiplicity is not a new phenomenon, but recent advancements will guarantee the world of search and marketing will be changing forever. Before you attend this week’s optimization and best practices sessions, hear from industry gurus about how search, marketing and information seeking is changing the industry that follows the search. Our ongoing series on universal search will include research data available only at SES.

I almost didn’t go to this, and in fact showed up a bit late, but in the middle of a day of sort of uninspiring sessions, this genuine conversation in panel format ended up making me glad I went.

As I walked in, comScore’s James Lamberti was discussing a very interesting graph they’d built. Their research into universal search results showed a direct correlation between the type of search result and clickthrough rate. In their model, with “no universal results” indexed as a 100% clickthrough baseline, including video results showed a slight decrease, to (I think) 98%. As more types of results came into play (images, maps, and so on), the clickthrough rate continued dropping, and result sets including “news” or “stock quotes” showed less than 50% clickthrough.

Predictably, the Google rep du jour (Jack Menzel) then got raked over the coals for the rest of the session and spent a lot of time denying that they’d changed their business model. If they’re intentionally providing information that doesn’t lead people to click off the page, aren’t they then becoming a portal site? How are they going to monetize this, and how will that affect downstream sites’ ability to monetize themselves?

Lamberti commented that the future value of search results will lie not in the click, but in what is displayed in the results. If people are clicking less, then it’s all the more important to show them something of value in the window of opportunity you have. The follow-on question I have is a practical one: how do you measure this? Right now, universal results show for a small percentage of search traffic, but I have no idea if my sites are appearing as part of an integrated SERP or not. I can’t get impression data for organic results, now, can I? No, I cannot.

The other Big Issue that put Jack on the hot seat was the fact that Google owns space in many of the channels now listed in the universal search results, and it’s hard to believe that there is no bias. YouTube has the most traffic and the most videos, but does that mean it has the best video for a particular result set? No. But the perception is that YouTube gets preference because Google owns it. Is it true? Jack insisted not.

Of course, with a G-man on the stage, the conversation was bound to focus there, but clearly Yahoo! and Ask and everyone else are taking their result sets in this direction as well, and in the theoretical or “big picture” view, the questions directed at Jack are going to be relevant to all. The final takeaway comments from the panelists were worth summing up, as they really seemed to encapsulate some very key bits of the future of search:

  • Lyndsay Menzies, Managing Director of Big Mouth Media, thinks it is important to understand how today’s searchers are different people. There is a whole generation growing up with YouTube and Flickr and social networks; they are interacting with the web in new ways, and their expectations are different from Google’s original “ten blue links.”
  • Lamberti agreed, opining that this is the way search has to go, because it’s what the consumer wants.
  • Jack Menzel simply said Google is not changing its business model at all: they are continuing to try to present the best, most relevant content on the web. Universal search just reflects the fact that there is more video, and there are more images, maps, and so on available.
  • Finally, John Battelle of Federated Media commented that we’re at a unique turning point, paralleling it to the shift from DOS to Windows. The difference being, instead of 200 developers in Redmond creating something in a vacuum, Google is engaging their users and advertisers in a conversation, and this is just one step along a continuum of changes leading to an interface and experience we don’t yet know.

All in all, quite a thought-provoking conversation – one which offered no solutions or tips and tricks, but addressed some hard questions and, I think, left everyone in the room with a lot to consider.

Posted in Ask, Google, SES, Yahoo, conferences, seo

Google opens new development office in Seattle

January 17th, 2008 Jason

Nice envy-making article in the Seattle P-I yesterday about Google’s new offices in Fremont. Dog-friendly, waterfront offices in one of the best parts of town. Lava lamps. Kayaks. The Red Door tavern.

I used to work across the street from there. Adobe is a half-block down the street. The offices were until recently occupied by Getty Images. With Microsoft, Yahoo!, Amazon.com, RealNetworks, Expedia, and countless startups in the region, there’s no denying that Seattle is still second only to Silicon Valley in the tech industry…the argument perhaps being whether, considering cost-of-living and quality of life, Seattle is really first these days.

Posted in Google

Does new equal better at Google now?

January 2nd, 2008 Jason

An interesting question on the Google Operating System blog, related to my post about Google’s quick indexing: does the new high-speed indexing mean that newer pages are being artificially weighted to rank higher?

The argument is that a brand-new page won’t have a bunch of backlinks pointing to it, so there’s no reason it should appear near the top of the SERPs directly after being indexed…unless Google is giving greater importance – at least temporarily – to newer content.

Maybe it’s only true for whatever they currently identify as ‘hot topics,’ but it’ll be worth watching. I may try a little experiment later today or next week and document it; I’ve certainly seen Google give a nice big boost to a newly indexed page before it settled into a more stable position. If, by interpreting ‘more recent’ content as ‘more relevant’ content, Google has slipped up here, it’s only a matter of time before the blogspammers start capitalizing on it by simply posting more crap more frequently to maintain consistently high rankings.

Posted in Google, seo

How quick is Google?

December 8th, 2007 Jason

I was at PubCon’s “Tools of the Trade” session yesterday afternoon. I took some notes, but missed a URL that I found myself wanting to check out this evening.

Todd Malicoat (aka Stuntdubl) was speaking, and mentioned a bookmarklet he used which would give a listing of the sites associated with an IP block. Right, so, Google: “stuntdubl number of sites on IP”

Number two is a blog entry titled “Tools of the Trade,” which is the seoroundtable liveblog entry from the session. With a reference to the tool I’m after: seolog’s Reverse IP domain tool. Boom.

I know that Google has gotten really good in the last few months with basically “instant indexing,” but this is the first time I’ve really seen it in action. Nice. A little scary, but impressive and powerful as well.

Posted in Google, pubcon, seo, tools