Thursday 31 October 2013

[Build Great Backlinks] TITLE

Build Great Backlinks has posted a new item, 'What Scares Google?'

Posted by Dr-Pete
As SEOs, we spend a lot of our collective time afraid of what Google might do
next. This Halloween, I thought maybe it was time to turn the tables. It's easy
to think of Google as an unstoppable force, but, like any company, Google has
weaknesses and their behavior suggests some very real fears about the future.

Fear #1: Lack of revenue diversity

Google does everything, right? They've got Chrome, Android, Google Glass,
Motorola Mobility, self-driving cars, flying WiFi, and now they're even trying
to make you immortal. It all makes for great PR, except for one very important
fact: this is how Google's revenue broke down in Q3 of 2013:




Factor in profitability, and the situation gets even worse (Motorola Mobility
operated at a loss in Q3). Compared to physical products or even traditional
advertising, AdWords and AdSense are as close to magic money-making machines as
you're going to find. Google didn't just find a pot of gold; they found the only
key to Leprechaun City, and the door locks from the outside. If the leprechauns
escape, Google is in trouble, and no self-driving car is going to find them.

Fear #2: Falling cost-per-click (CPC)

Even as Google's revenues continue to rise, their average CPC has fallen for
eight quarters in a row. So far, Google has managed to offset this CPC fall by
increasing overall impressions and creating advertising enhancements that drive
higher click-through rates (CTRs), but the trend is a very real problem and
absolutely tops Google's list of worries. What's driving this trend? That leads
us to #3...

Fear #3: Changing face of mobile

Traditionally, mobile ads have just been cheaper than desktop ads, and as
mobile devices proliferate, average CPCs have fallen. This problem led Google to
take an extreme approach: they forcibly rolled out "Enhanced Campaigns" to
all advertisers, effectively removing the option to have separate bids on mobile
devices.


The problem for Google is that this sleight-of-hand doesn't remove the reality
of how consumers behave on mobile phones and tablets, where traditional search
advertising is simply less effective (at least, so far). There's also just less
space for ads. Consider this desktop search result for "artificial christmas
trees":




Counting paid product placement, there are parts of 14 ad units visible on one
screen. There are 19 total ad units on the page (the right-hand AdWords block
contains 8 ads). Now compare this to the same query on iOS7 on my iPhone 5S:




On one screen of mobile results, there are only two visible ads, with five
total ads (two before and three after the organic results). Google promotes the
message that mobile is becoming more like desktop every day, as screen size and
resolution increases, and hybrid devices (like "phablets") become more popular.
The reality, though, is that mobile is still a unique animal, and will be for
the foreseeable future.


Google's development also suggests that they don't really believe this
desktop/mobile unification story. Desktop search UI is being driven more and
more by advances in the mobile UI. As smartphone traffic grows and Google dives
into even more experimental directions (like Google Glass), consumer behavior is
evolving quickly, and it's unclear how this evolution will change our
interactions with advertising.

Fear #4: Fickle investor confidence

Most days, Google is still a darling to investors, but as a publicly traded
company their amazing history is both a blessing and a curse. Google's core
revenues (not counting Motorola) have been up every quarter since Q1 of 2011:




It's a great story, except for one problem: Google is a mature company
with massive market share. The expectation that Google can continue to grow,
quarter after quarter, indefinitely, is unrealistic bordering on ridiculous. Of
course, investors don't want to hear that. Google will have a bad quarter, and
their investors have been trained on good news for far too long.


We tend to believe that someone has to beat Google at their own game, and that
a competitor like Bing has to best them at search. The reality is that Google is
fighting their own market expectations, and if Google fails to meet expectations
by enough, they may start to unravel.

Fear #5: The Facebook factor

We tend to focus on whether Facebook can ever compete with Google on search,
but there's one area where the social giant dominates Google. People go to
Facebook and stay; they go to Google to leave as quickly as possible.
Google's entire model flies in the face of the traditional advertising
philosophy of doing everything possible to increase pageviews and time-on-site.


Google is keenly aware of this problem. In addition to Google+, they've made
many moves in the past year that seem to be designed to increase pageviews. For
example, carousels (including the local carousel) and related searches in
Knowledge Graph boxes don't lead to outside sites; they lead directly to
more search results. Google is testing new Knowledge Graph entities that use
data from third-party sites but then link prominently to more Google searches.
For example, we recently spotted this KG entry in testing:






All of the blue links in this box (there are 7 visible in this image) go to
additional Google searches. Only the smaller, light-gray links go to the
original source websites.


Put simply, while Facebook may be struggling to define its revenue model, the
social giant is a platform. It's a place people go to do things, and it's a
place people spend a lot of time. For most of us, Google is a place we go to for
quick answers and then leave. The faster and better Google is at search, the
faster we leave, and for a company with 84% of its revenue tied up in
advertising, this is a serious problem.

Fear #6: Government regulation (US/EU)

I put this one last for one reason: while I think US and/or EU
regulators could theoretically cause significant harm to Google, I don't think
either government has the political will to crack down on an industry giant.
Google's problem, though, is that they can't simply play nice. They have to push
the envelope with advertising, and that's going to mean an ongoing battle with
regulators.


Take this recent example of a paid shopping result we spotted in testing (the
live version is a bit different):




Other than the "Sponsored" designator at the top, this paid shopping result
looks a lot like a Knowledge Graph entry. The test version is even placing one
selected provider and [Shop now] button before the specs and other information.
With CPCs falling, Google is going to keep pushing harder, and they're going to
keep testing government regulators' limits.

Why should we care?

This is not a "gotcha" post, and I don't necessarily think that Google is
doomed to fail. What I do think is that it's vital to maintain a healthy
perspective about Google's motives and possible futures. Last year, I said that
my #1 SEO tip for 2013 was to diversify. If I wrote that article again, I doubt
I'd change much. If your entire business is built on Google, you're riding a
wave that's eventually going to crash into the shore. It may be because Google
changes the rules, or it may be because they fail, but you've built your future
on something you absolutely can't control. If you understand Google's fears and
aspirations, you may at least start to appreciate why it's critical to build
your business on more than one marketing channel.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten
hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think
of it as your exclusive digest of stuff you don't have time to hunt down but
want to read!



You may view the latest post at
http://feedproxy.google.com/~r/seomoz/~3/d5B2ZpNXjyk/what-scares-google

You received this e-mail because you asked to be notified when new updates are
posted.
Best regards,
Build Great Backlinks
peter.clarke@designed-for-success.com

Tuesday 29 October 2013


Build Great Backlinks has posted a new item, 'Case Study: White-Hat Link
Building in the Gambling Industry'

Posted by sammiranda

This post was originally in YouMoz, and was promoted to the
main blog because it provides great value and interest to our community. The
author's views are entirely his or her own and may not reflect the views of Moz,
Inc.
During 2012, Google clamped down on poor link building tactics, devaluing
directories and article submission sites and adjusting the criteria for natural
links. Consequently, the gambling industry has been facing the daunting task of
restructuring its content marketing and SEO initiatives. Abusing article
directories and paying for guest posts with keyword-rich anchor text no longer
cut the mustard.


Alongside brand building through social media and delivering value-added
content, white-hat link building is high on the agenda to restore rankings. But
it's often dubbed mission impossible by gambling marketers.


Traditionally, gambling websites are short of linkable assets. First-party
games often constitute a casino's most valuable content, but they're developed
infrequently and reputable websites are hesitant to link to gambling-related
content because of the social stigma attached to the industry.


White-hat link building (an admittedly contentious term) is possible. In this
post I'm going to outline four strategies drawn from my experience of content
marketing (specifically 'guest posting', for want of a better term) for a
gambling affiliate website.


To conclude, I'll also provide three examples of the valuable backlinks I've
managed to obtain through using these tactics.

1. Lead generation

Gambling is a multi-faceted entity, incorporating psychology, legislation and
social issues. It features heavily in sports, discourse surrounding marketing
and advertising techniques, and even celebrity culture. Contrary to popular
belief, the scope for gambling-related content is massive; it stretches far
beyond the roulette guides and blackjack strategies found on poorly constructed,
niche gambling websites.


Content marketers let the stigma attached to gambling dictate their
initiatives, saying "There's no way awesomedomain.com will link to a gambling
website." But this blinkered outlook represents a wasted opportunity. Providing
there's no explicit material, a website should link to any credible source that
enhances reader understanding.





The kind of headline that demoralises gambling industry marketers.

Start to build a diverse list of online publications that can be approached for
guest posting. Ask yourself the following questions:

Do they accept freelancer contributions or guest posts?
Do they accept organic links in the article body?
If not, do they at least offer a promotional link in the author byline?
Are outbound links restricted to trusted contributors? If so, you'll need to
build up your credibility before benefiting from links.

Note: I dislike the term guest posting as it's often (now) associated with
systematic efforts to produce mediocre articles and place them on any website in
a similar niche. I neither endorse nor follow this churned approach to content
production. However, we'll use the term to keep it simple!


Finish by categorising your leads based on subject (e.g. business, education,
entertainment) and the website's SEO metrics (PageRank, citation flow, trust
flow, page authority, etc.).

Top takeaways
Don't let the stigma attached to your niche cloud your thought process.
Think of guest posting as feature writing, not copywriting. Avoid the churn!
It is not always about getting a link straight away. Sometimes you'll need to
prove your worth with valuable posts to build up trust and credibility.
2: Topic generation

My topic generation tends to fit into three subject categories.

Gambling

The first is gambling itself. You should aim to cover the full emotional
spectrum, from negative articles surrounding consumer gambling addiction to more
imaginative, uplifting pieces covering novelty bets and celebrity gamblers. You
don't have to glorify gambling. For instance, you might want to take a critical
standpoint towards PaddyPower's agreement with Facebook to launch a
sports-betting app, highlighting the perils of social gambling. This would
interest any gambling B2B website.

Marketing and business

Ironically, the second subject area is exactly what I'm doing now. When you're
discussing anything business or marketing related, you can write objectively
about the gambling sector. Gambling websites are known for audacious
advertising, flashy design and clever conversion optimisation, making them
perfect case studies for marketing and UX-related articles.


Though valuable, deep-links to your gambling website's core landing pages are
hard to embed as organic links within an article body. Rarely is it ever organic
to link to a page full of gambling bonuses, but it is possible. If you're
discussing website design and innovation, you can specify an excellent landing
page, which gives you ammunition for an organic link in a user experience post.


You can also look within for an engaging business story. Does your company
have a colourful history? Is your CEO a budding Richard Branson? Entrepreneur
websites love to feature original case studies, and should be happy linking to
your website if it underlines an intriguing corporate venture.

Shareable content

The third area is shareable, viral content. The internet is awash with trend
websites that disseminate funny and digestible content. You should be looking to
jump in with a snappy, "Top 10 Amazing Bets" kind of list that incorporates a
mix of images, videos and memes.


I've hijacked a quirky "question asking" formula from viral scientist Jonah
Berger to drill out facts and generate interesting ideas. Using 'roulette' as
an example:


Who chooses to play roulette?
What types of roulette are there?
What can we learn about the type of person who plays X version of roulette?


Now to mix it up a bit:


Where do those people come from?
What is the majority gender?


Now make it controversial:


Are people from region X more prone to gambling? Are men playing the "live"
version more? Is this because they are physically attracted to the croupier?


As you can see, questioning your own topic triggers a web of interesting and
contentious content - the kind of material that a much wider audience can
relate to, enjoy and share. Another creative formula I use for topic generation
is 'subject + random category or buzzword'. For instance:


Roulette + films (which brings me to the iconic Russian roulette scene from
The Deer Hunter).


Roulette + social (which brings me to the webcam-based phenomenon Chat
Roulette).


Roulette + travel (which in Heineken's case led to a video in which
holidaymakers were offered the chance to play 'Departure Roulette' and board a
flight to a random destination).


Roulette + magic (which brings me to popular British mentalist Derren Brown's
'Russian Roulette' trick).

Writing

Once you've seeded a topic and an angle, you should be looking to delegate the
writing of an article to a crack in-house writer: someone with a passion for
journalism and for developing their online presence. In my experience,
outsourcing to freelancers or an agency compromises quality and article
authenticity. The work is thin on research, low on personality and has a
'churned' feel to it, which brings me back to my stereotypical guest post gripe.
Make sure you leverage the knowledge of your internal team (i.e. your designer
for design-related material) to cover all potential article bases.

Top takeaways
Explore your niche. It is sure to bring up topics that bear wider social
significance.
Have you successfully implemented a marketing campaign? Is your business doing
great? Tell your own company story.
Brainstorm and generate shareable content. Use the "question asking" formula
above to come up with interesting topics.
3: Original and convincing outreach

Here's a fantastic post entitled "Revealed: Outreach Campaigns from some of
the Biggest SEO firms." It underlines just how useless some SEO agencies are at
establishing credibility and building rapport with editors and webmasters. They
have to resort to manufactured guest-post outreach.


My outreach is far more tailored and elaborate. I throw in a bio, examples of
my published work and a brief employment history. There really is no substitute
for published work, and I'm fortunate enough to have articles on websites like
Buzzfeed and The Bleacher Report. My emails will be personalised, complimentary
and explain why my content is suitable for the website's target demographic.
Email outreach (summarised perfectly by this infographic) is a science
in its own right.




I sugar-coat my job role (senior editor at a gaming information portal) and
justify my outreach on the grounds of a writer wanting to broaden his horizons
and bolster his portfolio. For the most covert infiltrations, I pose as a
journalist looking for an actual job as a remote freelancer. Though I'm
approaching these websites for a link to my company website, it's not always at
the forefront of my agenda. I want to diversify my writing portfolio and elevate
my own online presence to establish regular writing gigs in the future.


For first-time contact with an editor, I always include an article attachment.
I've enjoyed a lot more traction with this tactic. Editors receive and reject an
inordinate number of pitches, but are far more likely to respond if you've gone
to the effort of constructing an original article.


Another top tip: when I've linked to a business or website in a previous
article, I'll approach them for a guest post later down the line so they can
return the favour. This is a great way to break the ice: simply let them know
you mentioned their insightful article.

Top takeaways
Personalise your outreach. Research the editor, the website and its target
audience, and explain why your content is suitable.
Ask yourself: Are you emailing a webmaster, or an editor? The former will be
familiar with SEO, and will scrutinise your outreach more heavily. An editor
with a journalistic background should be more receptive to content proposals.
Be yourself: an ambitious, talented freelance writer. By mentioning your
company, you run the risk of being ignored on the basis of seeking commercial
gain.
Where possible, include an original article for the target website as part of
your email outreach.
4: Build your website's linkable assets

Successful link building means working with the internal content team to
develop linkable assets. This can be a mix of ephemeral news content,
infotainment articles and more academic, educational resources. Across our
websites, we've covered the whole spectrum, from a Vegas-themed HTML5
puzzle game that amassed 1,000 shares, to a serious investigation into casino
design.


One of my company's more ambitious projects was the creation of an infographic
documenting the probability of stumbling upon any given piece of image-based web
content. The luck factor prevalent in gambling was a springboard for our
tagline, "How Lucky Are You To be Reading This Infographic?" The outreach
campaign went far beyond standard infographic "directories," earning us links
from the likes of Cheezburger.com (the heart of many viral pieces),
Shortlist.com (known for their magazines in the UK), and even a Mashable.com
editor's personal blog.

Top takeaways
Focus on all media types. If you're conducting a video interview with a key
industry figure, get it transcribed and make it into a podcast to maximise your
outreach.
Formalise a comprehensive outreach plan: Find relevant Twitter influencers
through Followerwonk, track down key bloggers through Google blog search and
contact industry journalists through Journalisted to cover your story.
Three "guest post" examples

Here are three examples of the aforementioned tactics being put into practice.
Naturally, I can't divulge too many leads!


1: The Bleacher Report: "Should Gambling Be Given The Boot From English
Football?"







The Bleacher Report is the world's fourth-largest sports website. It thrives
on user engagement, and its article base is growing rapidly courtesy of an
advanced contributor program. Anyone can apply to write for Bleacher Report, and
after a two-stage screening process, you're awarded admin rights to publish an
internal article. I was accepted into the contributor program after providing
examples of my sports writing. The Bleacher Report prides itself on attributing
relevant resources, so I decided to produce an op-ed piece about gambling in
football with a link to Roulette.co.uk's internal blog post, "Footballers in
Vegas."


2: Growth Business: "Five Reasons To Start An Affiliate Business"







Growth Business is a highly respectable business news and advice website. It
doesn't advertise guest posting opportunities, but I noticed that a range of
entrepreneurs supplied content in the comments and analysis section. On the back
of my experience in affiliate marketing, I pitched an original article "Five
Reasons To Start An Affiliate Business." Lists are an integral part of content
marketing: they're tangible, digestible and make for convenient reference
points. Since I was referencing my own websites as case studies, I was able to
embed organic links, and my contribution was duly accepted. I linked to other
affiliate marketing resources within the article body to aid reader
understanding and avoid any suspicions of commercial gain.


3: Grads Blog: "Gibraltar: An Opportunity For Graduates?"







The Grads.co.uk blog welcomes student and career-related content. It's a
growing, multipurpose website offering career advice, job listings and
interactive student engagement, so I expect the metrics to increase
significantly over time. Having graduated a little over two years ago and moved
to gambling operator hub Gibraltar, I offered a featured article on the merits
of relocating and finding employment abroad. I embedded a link to my company's
Gibraltar infographic, which included a vexel replica of the peninsula and
important stats about its economy and lifestyle. I want to cement a long-term
relationship with the editor and avoid the 'one-off' guest posting tactic, so I
offer monthly contributions.

Conclusion

It's worth noting that the aforementioned tips shouldn't necessarily be
followed in order. Topic generation might be the last thing I do if I've forged
an editorial contact and secured a regular writing gig. I might publish an
internal article on a whim to establish a relevant backlink, or build a whole
guest posting campaign around a static, linkable asset.


I've written this post with reference to the gambling industry. However, it
can be applied to any difficult niche. Left-field topic generation, skilled
feature writing and tailored outreach can generate sterling results.


Finally, I want to stress that these guest posting tactics are more than a
link building exercise. They're something we tie into an overall content
marketing strategy to drive referral traffic and social shares. After a
three-month implementation period, we recorded an overall referral traffic
increase of 45.54% on the previous three months. The majority of this came from
social websites, with overall social referral traffic increasing 247.98% in the
same period.


Do you agree with these tactics? Have you devised your own, unique outreach
plan?


I'd love to hear the Moz community's thoughts on link building for difficult
niches!



You may view the latest post at
http://feedproxy.google.com/~r/seomoz/~3/ZrjC7qtwjrQ/case-study-whitehat-link-building-in-the-gambling-industry



Build Great Backlinks has posted a new item, 'Quick Guide to Scaling Your
Authorship Testing with Screaming Frog'

Posted by kanejamison
Nearly all of us have used Screaming Frog to crawl websites. Many of you have
probably also used Google's Structured Data Testing Tool (formerly known as the
Rich Snippet Testing Tool) to test your authorship setup and other structured
data.


This is a quick tutorial on how to combine these two tools to check your
entire website for structured data such as Google Authorship and
Rel="Publisher", along with various types of Schema.org markup.

The concept:

Google's structured data tester embeds the URL you're testing directly in its
own URL. Here's an example:

When I enter this URL into the testing tool...
http://www.contentharmony.com/tools/
...the testing tool spits out this URL:
http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.contentharmony.com%2Ftools%2F&html=

We can take advantage of that URL structure to create a list of URLs we want
to test for structured data markup, and process that list through Screaming
Frog.

Why this is better than simply crawling your site to detect markup:

You could certainly crawl your site and use Screaming Frog's custom filters to
detect things like rel="author" and ?rel=author within your own code. And you
should.


This approach will tell you what Google is actually recognizing, which can
help you detect errors in implementation of authorship and other markup.


Disclaimer: I've encountered a number of cases where the Structured Data
Testing Tool reported a positive result for authorship implementation, but
authorship snippets were not appearing in search results; changing the
implementation method resolved the issue. Also, authorship may not be granted
or present for a particular Google+ user. In short, the Structured Data Tester
isn't perfect and will produce false positives, but it suits our need in this
case: quickly testing a large number of URLs at once.

Getting started

You're going to need a couple things to get started:

Screaming Frog with a paid license (we'll be using custom filters which are only
available in the paid version)
One of the following: Excel 2013, URL Tools for Excel, or SEO Tools for Excel
(any of these three will allow us to encode URLs inside of Excel with a formula)
Download this quick XLSX template: Excel Template for Screaming Frog and
Snippet Tester.xlsx
The video option

This short video tutorial walks through all eight steps outlined below. If you
choose to watch the video, you can skip straight to the section titled "Four
ways to expand this concept."





Steps 1, 2, and 3: Gather your list of URLs into the Excel template

You can find the full instructions inside the Excel template, but here's the
simple 1-2-3 version of how to use the Excel template (make sure URL Tools or
SEO Tools is installed before you open this file or you'll have to fix the
formula):



Step 4: Copy all of the URLs in Column B into a .txt file

Now that Column B of your spreadsheet is filled with URLs that we'll be
crawling, copy and paste that column into a text file so that there is one URL
per line. This is the .txt file that we'll use in Screaming Frog's list mode.



Step 5: Open up Screaming Frog, switch it to list mode, and upload your file




Step 6: Set up Screaming Frog custom filters

Before we go crawling all of these URLs, it's important that we set up custom
filters to detect specific responses from the Structured Data Testing Tool.




Since we're testing authorship for this example, here are the exact pieces of
text that I'm going to tell Screaming Frog to track:

Authorship is working for this webpage.
rel=author markup has successfully established authorship for this webpage.
Page does not contain authorship markup.
Authorship is not working for this webpage.
The service you requested is currently unavailable.
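The same five phrases can also be checked outside Screaming Frog. This sketch (a hypothetical helper, not part of either tool) maps each filter number to its phrase and reports which filters would fire on a given tester page's source:

```python
# The five Screaming Frog custom filters and the phrase each one matches.
FILTERS = {
    1: "Authorship is working for this webpage.",
    2: "rel=author markup has successfully established authorship for this webpage.",
    3: "Page does not contain authorship markup.",
    4: "Authorship is not working for this webpage.",
    5: "The service you requested is currently unavailable.",
}

def matched_filters(page_html):
    """Return the filter numbers whose phrase appears in the page source."""
    return [n for n, phrase in FILTERS.items() if phrase in page_html]
```

Filters 1 and 2 should fire together on a correctly configured page; filter 5 means the URL should be re-crawled.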
Here's what the filters look like when entered into Screaming Frog:



Just to be clear, here's the explanation for each piece of text we're
tracking:

The first filter checks for text on the page confirming that authorship is set
up correctly.
The second filter reports the same information as filter 1. I'm adding both of
them for redundancy; we should see the exact same list of pages for custom
filters 1 and 2.
The third filter is to detect when the Structured Data Testing Tool reports no
authorship found on the page.
The fourth filter detects broken authorship, typically caused by a faulty link
or by the Google+ user not having acknowledged the domain in the "Contributor
To" section of their profile.
The fifth filter contains the standard error text for the structured data
tester. If we see this, we'll know we should re-spider those URLs.
Here's the type of text we're detecting on the Structured Data Tester. The two
arrows point to filters 3 and 4:


Step 7: Let 'er rip

At this point we're ready to start crawling the URLs. Out of respect for
Google's servers and to avoid them disabling our ability to crawl URLs in this
manner, you might consider adjusting your crawl rate to a slower pace,
especially on large sites. You can adjust this setting in Screaming Frog by
going to Configuration > Speed, and decreasing your current settings.

Step 8: Export your results in the Custom tab

Once the crawl is finished, go to the Custom tab, select each filter that you
tested, and export the results.



Wrapping it up

That's the quick and dirty guide. Once you export each CSV, you'll want to
save them according to the filters you put in place. For example, my filter 3
was testing for pages that contained the phrase "Page does not contain
authorship markup." So, I know that anything that is exported under Filter 3 did
not return an authorship result in the Structured Data Testing Tool.

Four ways to expand this concept:
1: Use a proper scraper to pull data on multiple authors

Screaming Frog is an easy tool to do quick checks like the one described in
this tutorial, but unfortunately it can't handle true scraping tasks for us.


If you want to use this method to also pull data such as which author is being
verified for a given page, I'd recommend redesigning this concept to work in
Outwit Hub. John-Henry Scherck from SEOGadget has a great tutorial on how to use
Outwit for basic scraping tasks that you should read if you haven't used the
software before.


For the more technical among us, there are plenty of other scrapers that can
handle a task like this - the important part is understanding the process so you
can use it in your tool of choice.

2: Compare authorship tests against ranking results and estimated search volume
to find opportunities

Imagine you're ranking 3rd for a high-volume search term, and you don't have
authorship on the page. I'm willing to bet it would be worth your time to add
authorship to that page.


Use hlookups or vlookups in Excel to compare data from three tabs: rankings,
estimated search volume, and whether or not authorship is present on the page.
It will take some data manipulation, but in the end you should be able to create
a Pivot Table that filters out pages with authorship already, and sorts the
pages by estimated search volume and current ranking.


Note: I'm not suggesting you add authorship to everything; not every page
should be attributed to an author (e-commerce product pages, for example).

3: Use this method to test for other structured markup besides authorship

The Structured Data Testing Tool goes far beyond just authorship. Here's a
short list of other structured markup you can test:

E-commerce product reviews and pricing
Rel Publisher
Event Listings
Review and price markup on App Listings
Music Snippets
Recipes
Business Reviews
Just about anything referencing schema.org, data-vocabulary.org, and similar
markup.
4: Blend this idea with Screaming Frog's other capabilities

There's a ton of ways to use Screaming Frog. Aichlee Bushnell at SEER did a
great job of cataloging 55+ Ways To Use Screaming Frog. Go check out that post
and I'm sure you can come up with additional ways to spin this concept into
something useful.

Not to end on a dull note, but a couple of comments on troubleshooting:
If you're having issues, the first thing to do is manually test the URLs you're
submitting and make sure there weren't any issues caused during the Excel steps.
You can also add "Invalid URL or page not found." as one of your custom filters
to make sure that the page is loading correctly.
If you're working with a large number of URLs, try turning down Screaming
Frog's crawl rate to something more polite, just in case you're querying Google
too much in too short a period of time.
When you first open the Excel template, the formula may accidentally change
depending on whether or not you have URL Tools or SEO Tools installed already.
Read the instructions on the first page to find the correct formula to replace
it with.
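For the first of those checks, a small script can catch URLs mangled during the Excel steps before you re-crawl. A minimal sketch (the example URLs are placeholders):

```python
# Sketch: flag URLs that were likely mangled by the Excel concatenation
# steps (lost scheme, stray whitespace) before pasting them back into
# Screaming Frog. Example URLs are placeholders.
from urllib.parse import urlparse

def looks_valid(url: str) -> bool:
    """A URL passes if, after trimming whitespace, it has an http(s) scheme and a host."""
    parsed = urlparse(url.strip())
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

urls = [
    "https://example.com/post",          # fine
    "example.com/post",                  # scheme lost during concatenation
    "  https://example.com/other  ",     # stray whitespace, recoverable
]
bad = [u for u in urls if not looks_valid(u)]
print(bad)  # URLs to fix before re-crawling
```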

Let me know any other issues in the comments and I'll do my best to help!

Sign
up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest
pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it
as your exclusive digest of stuff you don't have time to hunt down but want to
read!



You may view the latest post at
http://feedproxy.google.com/~r/seomoz/~3/3m34smnf22c/scaling-authorship-testing-with-screaming-frog

You received this e-mail because you asked to be notified when new updates are
posted.
Best regards,
Build Great Backlinks
peter.clarke@designed-for-success.com

Monday 28 October 2013

[Build Great Backlinks] TITLE

Build Great Backlinks has posted a new item, 'Search News from the Future'

Posted by Reinhart
Citizens of Moz, I come to you at a most desperate hour. I've just returned
from London, Distilled's international headquarters, and I've been patiently
awaiting this moment to share some potentially niche-shattering news with you
all.


I don't quite know where to begin, so I'll just say it: You see, the stories
are all true. Will Critchlow is a wizard. I know, it's common knowledge that
nearly all Englishmen are wizards, I've seen Harry Potter too. But Mr. Critchlow
is a wizard with a most peculiar and exciting gift: that of clairvoyance. He can
see the future!




And no, I don't mean in a Steve Jobs/Carl Sagan/George Orwell futurist kind of
way either. I mean he quite literally has a translucent, viridian ball of
crystal sitting on his desk that divines that which has yet to transpire! I
wouldn't have thought anything of the object upon first glance, but one night I
came back to the office to grab my misplaced jacket to low mutterings, frantic
typing, and wisps of smoke coming from the other side of the room. I dove into
the bean bag room so as not to draw his attention and waited patiently, shaking
with dread but with a fully piqued curiosity.


I couldn't make out what he was chanting and I don't think I would have been
able to translate the Latin anyway. After about 30 minutes of this I heard him
pack up his things and leave. I'm normally more of the craven type when it comes
to adventure, but something that night pressed me to snoop around my boss's desk
for the truth.


The smoke and emerald glow dissipated as I shuffled some papers around. The
smell of ozone lingered in the air. Nothing looked too out of the ordinary: the
latest issue of Inc. Magazine, a Post-it note with a hastily scrawled and
circled "Fire Phil Nottingham: Oct 31"... wait... this news clip read... 2016?
Maybe he was just tired and mistak– 2020?! What was I looking at here?!


What I'm about to reveal may shock or even scare some readers, but I believe
it is essential that the Moz community hear it nevertheless. I may lose my
job (nay, I may be turned into a toad with a dreadful cockney accent), but it
will have all been worth it to bring this knowledge to you
all. My interpretations may be shaky at best, but the headlines were as clear as
day: These are digital marketing news items from the future!


You may never get a better chance to peek behind the tapestries of time as you
do now. So read on, friends, and be brave.

Term "mobile" removed from Analytics, Google's vocabulary

MOUNTAIN VIEW, CA – April 14, 2015 – A term commonly used by
webmasters, digital marketers and industry analysts may not be so common after
today. Over the weekend, Google removed the term "mobile" from all of its web
products, including Webmaster Tools, Google Analytics, and the company's AdWords
tool set.


"Mobile has been a deprecated term for some time now," the search giant
explained in a corresponding blog post. "The lines between where and when we
view our various screens have been blurred beyond parsability. All web-based
content can be viewed on any device these days and thus it makes little sense to
refer to all non-traditional desktop visitors as 'mobile.' "


The web is very close to becoming truly device-neutral largely thanks to
thoughtful webmasters, CMS development teams and device manufacturers who have
all come together to deal with the issue of rendering content from multiple
angles. Data on device type, screen size, and other metrics is still readily
available throughout Google's suite of webmaster tools. [...]

Voice searches now constitute 28% of all queries

AUSTIN, TX – June 29, 2020 – Search engine corporations such as
Google, Microsoft, and Yahoo have traditionally held on tight to their data,
offering limited info on global search trends, but a recent study conducted by
the University of Texas has unearthed compelling evidence that shows almost a
third of all search queries are now conducted via voice search.


The nation's only "Professor of Search," Dr. Pete Meyers of the University of
Texas explains the results of his institution's study:


"They called me mad back in 2013, but voice searches now constitute a huge
chunk of the search pie. Several years ago we would have found it laughable to
be walking down the street talking to our devices, let alone talking to our
devices within our home. But with the advancement of voice recognition software
and the nearly ubiquitous nature of the hardware to back it up, today we're
estimating that voice search makes up almost a third of all search queries, and
that number seems to be on the rise."


A few of the major contributing factors to the ascendancy of voice search
include web-enabled automobiles, home appliances, [...]

Traditional television advertising revenues wane as new year begins; YouTube,
Twitter and Facebook post record annual reports

NEW YORK, NY – January 1, 2019 – Google's video platform, YouTube
(GOOG) along with social networks Twitter (TWTR) and Facebook (FB), posted
record gains in 2018 as social and video advertising revenues shattered
forecasts and industry expectations. Analysts speculate that this was due in no
small part to the transference of advertising spends on traditional television
media. When FB and TWTR first hit the stock market, many buyers felt the social
networks needed to prove themselves in the competitive world of media
advertising, but as the multi-billion dollar industry of traditional television
advertising continues to crumble amid stiff competition from a la carte
alternatives like Netflix and Amazon, more marketing budgets are now trickling
down to companies such as YouTube, Facebook, Twitter, and Google's AdWords
platform.

"Television is evolving and has been for some time," says Will Critchlow,
founder and president of the world's foremost digital marketing agency,
Distilled International. "Companies want to get their products in front of
consumers, and those consumers are now watching television online. They're doing
everything online." [...]

Netflix introduces video, text advertisements for streaming content

LOS GATOS, CA – January 16, 2019 – Earlier this month we saw
reports that television advertising revenues were waning in the new year. Today
we can report that some of those dollars will most certainly be spent on
Netflix's streaming video platform. The company issued a press release this
morning indicating that the company, for the first time in its history, will now
display advertisements before many of their most popular original programs such
as Arrested Development, Orange is the New Black, and the much-anticipated final
season of House of Cards. Advertisements will be similar to those seen on
YouTube, Hulu, and other video sites.

"With the amount of quality content and general media access we're collecting,
we have no choice but to find revenue from other sources if we want to remain at
the $9.99 price point we set in 2015," Netflix CEO Reed Hastings said on an
investor conference call yesterday. [...]

Google cracks down on fake, purchased +1s

MOUNTAIN VIEW, CA – February 1, 2017 – For the first several years
of the social platform's life, Google Plus seemed a joke to many. Comparisons
were made to MySpace and other defunct social platforms, and G+ was often called
"a graveyard" as it faced competition from the already-established Facebook. But
since that time, the network has shown some real staying power with the full
faith and credit of Google Inc. behind it. To that end, in late 2016 we reported
on Google's announcement that plus ones, Google's own brand of "Likes," would
help determine the order in which documents appeared in its search engine
results pages. This move forced webmasters everywhere, for big and small
companies alike, to reconsider the social platform for conducting regular
business. Since then, various scams have been created to generate fake or paid
"+1s" for sites who want quick and easy exposure in Google's search engine.
While this practice has been effective for some, it is not sitting at all well
with the search giant.


Today, Google announced a crack-down on those sites which it has determined to
have been generating fake +1s. The process should be easy enough for Google as
it has access to all of its users' account data and history. One well known
Google representative, speaking on condition of anonymity, cut straight to the
point, asking "What were they thinking?" in reference to marketers who've been
attempting to game Google's algorithm. "As if we haven't been aware of fake
Google plus accounts since Plus's inception?... [...]

Panda and Penguin now refresh daily

MOUNTAIN VIEW, CA – July 22, 2016 – Five years ago, Google
launched a pair of systems designed to keep poor content out of its search
engine's results and combat questionable citation building tactics. The former
is known as the "Panda" update and the latter, "Penguin." Until this week, the
two algorithms have been updated on unpredictable schedules based on when the
massive amounts of data required to make proper determinations about the quality
of a website and its internet-wide citations were parsed. This would result in
periodic "refresh" days, where webmasters who engaged in deceptive marketing
practices would brace themselves for potential losses of traffic to their
webpages. These updates would traditionally occur four or five times per year.
Last Monday, Google announced that they've dedicated additional resources to
these systems and are now able to parse the same data sets many times faster
than before, meaning that these updates will now essentially occur in real time.

"Our users don't want clean and relevant answers three months from now, they
need them immediately," declared a Google representative at an industry
conference in San Diego... [...]

Google removes "organic keywords" tab from analytics

MOUNTAIN VIEW, CA – October 31, 2020 – [...] and for those who
don't remember, Google used to give webmasters access to what was known as
"keyword data," allowing them to make better decisions about their sites'
development and what their users' intent might be when visiting. In the fall of
2013, Google denied access to almost all of this organic data by encrypting all
searches generated through Google.com. Today, Google took it a step further and
completely removed the "Organic Keywords" tab from its popular web analytics
program.


"We've been meaning to do this for some time now," said a senior Google
representative, speaking on condition of anonymity. "We're very concerned about
our users' privacy, and that's why we started to deny access to sensitive data
such as search queries. We know our users don't want webmasters knowing what
they're searching for, and we want to respect that."


Later in the same conversation, the same Googler said with a grin and a
chortle, "There'll always be keyword data in AdWords." Adopting a sing-song
tone, one might have quoted Arrested Development's timeless one-liner, "There's
always money in the banana stand."


In other news, Google added a new tab to analytics titled "ASL," which
includes less-sensitive data about users such as age, sex, location, weight, and
sexual preference [...]

Cutts: Given today's technology, page speed a "deprecated metric" for
determining site quality

MOUNTAIN VIEW, CA – October 31, 2022 – For as long as there have
been websites, there have been slow websites. In past generations, particularly
in the 2000s and 2010s, internet users have been frustrated with delay times and
unresponsive pages. Because of this, Google has made several attempts to help
webmasters create more efficient sites, and has also taken measures to ensure
that particularly slow sites do not register as frequently in their search
results. But technology has come a long way since the days of 4G and 5G wireless
networks.

Matt Cutts, the long-time head of Google's anti-spam team, addressed a group
of fledgling digital marketers this weekend at SMX 2022 saying, "Given the
bandwidth speeds of today's internet service providers, we're no longer using
page speed as an indicator of 'site quality,' as this metric is now almost
completely deprecated. In the past it made sense to devalue a site that took 10
to 15 seconds to load, and thus provided a negative experience for Googlers. But
with today's 7G Quantum LTE-X technology, the difference between page load times
are negligible, almost instant and ultimately irrelevant.
"Does anyone else remember 4G LTE? How about 2G? [Expletive], I'm old."


So, there you have it. That's all I found in Will's office that fateful night.
Leave a comment and wish me good tidings if you be so bold. Though as a
soon-to-be-toad, I may have a difficult time responding to your queries.



You may view the latest post at
http://feedproxy.google.com/~r/seomoz/~3/6Vi5EaivkLY/search-news-from-the-future


Friday 25 October 2013


Build Great Backlinks has posted a new item, 'Link Reclamation - Whiteboard
Friday'

Posted by RossHudgens
The one thing any good marketer appreciates more than a mention of their brand
is a link back to their own domain. For a variety of reasons, some
authors, no matter how well-meaning they are, don't include that link
with the mention. With the right tools and a little diplomacy, these are some of
the easiest opportunities to earn valuable links back to our own properties, and
in today's Whiteboard Friday, Ross Hudgens gives us several great places to
start.






Ross Hudgens - Link Reclamation - Whiteboard Friday





For reference, here's a still of this week's whiteboard:



Video Transcription



Hey Moz fans, welcome to Whiteboard Friday. My name is Ross Hudgens, and I
work for Siege Media, a content marketing and link development agency. Today
I'm going to talk a little bit about link reclamation, one of my
favorite subjects.


Link reclamation, if you're not familiar with it, is the task of finding
opportunities on the web where you've been linked to or you've been mentioned,
but haven't been properly linked to for whatever reason. Maybe the webmaster
messed something up, maybe they just didn't find the right URL, maybe they
misspelled your domain name, all kinds of reasons inform why someone might do
that incorrectly.


For the purposes of hopefully getting more traffic and also getting those
links that we like that potentially can add a lot of value to our domain and
help us rank for keywords we want to, it makes sense to do link reclamation. And
even more so, I really like it because the conversion is so high. Because
someone has already mentioned you, you get a really high conversion on your
request, because normally they already have a positive brand sentiment for you.


So one thing to think of in general for link reclamation, if you think about
it as a main concept, is it most frequently occurs when your brand has an
experience outside of the digital world or disconnected from your main domain.
So it could still be digital, but disconnected. So for example, if you're
Target, you probably have a lot of experiences in store where people refer to
that, and it doesn't necessarily make sense to refer to your domain.


Similarly, if you have a YouTube video, it might not make as much sense. But
there are still opportunities to ask for a citation back to your domain in those
instances that you might not have totally realized before that. That is possible
with these kinds of opportunities.


So there are a lot of ways to go about doing that. I'm going to dive into a
few of them in this strategy section. A big one for big brands is brand
misspellings. Frequently, people will have webmaster error. For whatever reason,
they will misspell your domain name.


For example, if you're a giant company, Pepsi or something like that, you
could look for PEPS.com, and likely you're going to find some instances where
people have linked to that thinking it's Pepsi. So you can go to them and say,
"Hey, please fix your error on your site, how you've linked to us. It will help
your users. It will help us. It will be a great thing all around." They will
generally do that.


So a good process for finding those, and for seeing if there is actually
link volume around different misspellings, is detailed in a post by John-Henry
Scherck, whose name I might have mispronounced. At that URL you can find the
process, which I can't really get into as much as I would like to here, but I
definitely recommend you go to that post and check it out.


The most powerful ongoing process is simple brand monitoring. Moz's tool,
Fresh Web Explorer, is especially powerful for that. You can use advanced
operators to see where your brand has been mentioned but you haven't been
linked to. I think it's minus ld; you can check the advanced operators
dropdown. But that's extremely powerful to just monitor and see who's
mentioning you and all those things as an ongoing thing.


Similarly, Google by date, so Google has an advanced search setting where
you can search by 24 hours, a week, a month, things like that, and it will give
you the opportunity to see recent mentions that sometimes offer a nice
supplement to the fresh web index. So it's always good to get multiple looks at
the web, Fresh Web Explorer I believe uses RSS feeds specifically. Google, by
date has, of course, their own comprehensive index. So it's nice to get a blend
of both for finding mentions where people have talked about you, but not linked
to you.


So another good one is allinurl:/tag/brand. So insert your brand: if you're
Pepsi, allinurl:/tag/pepsi. So these are instances where people think you're
significant enough to actually mention you, significant enough to create a tag
for you. But sometimes they haven't linked to you. They've created a tag for
you because
they've talked about you in some way in a post. So there's a lot of sometimes
opportunity to get links on those kinds of pages.


So it's kind of a cool way to easily use Google search engine to find pages
that do those kinds of things. So you can just search by that, scrape the
results, dump it into like a spreadsheet, and you can quickly find who hasn't
linked to you by doing that kind of process.
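As a sketch of that last step, here's one way to flag scraped mention pages that have no link back to your domain. The page URLs and HTML are canned placeholders; in practice you'd fetch each scraped URL:

```python
# Sketch: given pages that mention your brand (e.g. scraped from an
# allinurl:/tag/ search), flag the ones with no link back to your domain.
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Scans anchor tags for an href containing the target domain."""
    def __init__(self, domain):
        super().__init__()
        self.domain = domain
        self.found = False
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if self.domain in href:
                self.found = True

def links_to(html: str, domain: str) -> bool:
    finder = LinkFinder(domain)
    finder.feed(html)
    return finder.found

pages = {
    "http://blog-a.example/tag/pepsi":
        '<p>We love <a href="http://pepsi.com">Pepsi</a></p>',
    "http://blog-b.example/tag/pepsi":
        "<p>Pepsi announced a new flavor.</p>",
}
unlinked = [url for url, html in pages.items() if not links_to(html, "pepsi.com")]
print(unlinked)  # pages to add to your outreach list
```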


Short form text is a kind of a unique thing. It's any kind of asset that is
short, like a definition, a stat, anything like that, that might have been
mentioned or stolen without attribution. So examples of that: One stat that is
frequently referred to is that every second of page load time is a seven
percent dip in conversion rate.


So if you have that stat and you actually were the source of that stat, you
could track it in Fresh Web Explorer and Google search, these same kind of
things just like you do your brand, see who's mentioned it, and say, "Please
attribute us properly with that stat."


Other examples I like pointing to frequently is Content Marketing Institute.
They have a "what is the definition of content marketing," and that has been
stolen like 125,000 times. There is this huge opportunity where people are just
taking that, not linking to it, not attributing it properly just because they
are lazy or what have you. If you go out and reach out to those webmasters, you
can easily get links back, because most of the people will panic in that moment
and be like, "Oh, it was just an honest mistake," and link to you as they should
have.


So sometimes you might have that asset, sometimes you might not, but it's
also something to think about and have in the back of your mind when you do that
kind of data analysis that might come with an interesting stat that people might
want to take.


Reverse image search, so you have a logo or a set of logos, maybe you have
interesting assets on your site. For example, if you're National Geographic, you
might have images that everyone takes. You can start monitoring those images,
see who's taken them without attribution, and get links back by doing requests
of, "Please attribute properly."


There are tools like Image Raider, which I know does that. I haven't used it
extensively, but it's pretty good. Similarly you can use tools like TinEye and
also just reverse image search on Google to find those mentions. Of course, an
explicit and powerful one is your own logos. So you can see who has mentioned
your logo, but not linked to you in the same kind of ways.


Also, just as a tangent from this, Screaming Frog is a really powerful tool.
If you haven't heard of it, it's a good way to dump a lot of links in there,
because sometimes you might see this as a negative process if you go to a lot of
these links and you've already been linked to. So you can use Screaming Frog as
a custom filter and find exactly who hasn't linked to you by setting an exclude
to your domain name. So it can make this process more efficient for you and less
frustrating depending on the domain.


YouTube videos, so a lot of people make video assets that are hosted on
YouTube or some other platforms, but they don't necessarily get linked to for
whatever reason. Again, it's something separate from your main domain. So
because of that disconnect, people often don't link to it. Or
they're going to link to the YouTube video, but it doesn't mean that they're not
willing to link to you as a business if you request it.


So a good way to find that is either dump the YouTube URL in Open Site
Explorer or whatever your link management tool is and see what the data is
behind that. Or just look at the dashboard. YouTube specifically has an embed
dashboard, so you can see where people have embedded that video and not linked
back to you, or hopefully they have already linked to you, of course. But you
can capture that gap where they haven't linked to you and they should have
because it has a slight disconnect from your main domain as a YouTube video.


Links and tweets and +1s. This is more of an advanced thing. If you have a
pretty powerful Twitter account or a Google+ account, you can actually take your
archive from Twitter and create a spreadsheet essentially, dump all of your
tweets in there. Use a tool like Screaming Frog or use the Moz API for example.
Look at the data and see if any of those have been linked to and then see if
there is an opportunity to actually reach out and say, "Hey, in this tweet you
linked to my Twitter account. I'd really appreciate it if you linked to my
domain instead."


A similar thing can be done for Google+. You have to do a site search for
your Google+ URL, and it won't get all of your URLs. But if you scrape that
and do the same kind of process, you might find places where people have linked
to maybe an interesting tweet by you or some interesting quote you gave or
something like that, where you might have not been linked to that you might have
wanted to and do a request like that.


So finally, moving links to primary domain. So what that means is, if you're
a big brand, sometimes you have multiple owned properties across the web, but
not all of them are your primary KPI for SEO purposes. So I've worked with
companies who have two main domains, because they can't make up their mind
really, and one is very clearly their SEO domain where they want to rank for
stuff. It's not totally clear which one in the mind of consumers is the primary
one.


So something that you can do and that I did in that instance is you go to
the one that they don't really have reason to rank for anything and ask them to
link to the other one because maybe you're changing focus or that's how you
would evangelize it to them. Most of the time they'll do it because they like
you and they're already linking to you and things like that. You'll get the more
direct link power from those kinds of links.


So, when you're doing this process, it's not as simple as asking every
single person to link to you. There's risk involved if you do this incorrectly.
So definitely be delicate and don't step on the wrong toes, because when you
do this, if you're a big brand there will be sites that cover you all the
time. Occasionally people will write about you, and maybe in that single
post they don't link to you, even though in the previous ten they did.


So in those kinds of instances, just let it go. You don't need a link in
every single post that you get. Potentially it can be kind of off-putting to
the person that covers you all the time, and you don't want to lose that good
press by burning bridges by being overaggressive as an SEO. So in those kinds of
instances, verify that you have links already, that they cover you all the time
and just let it go if that's the kind of instance where they just mistakenly
didn't link to you that one time.


Similarly, don't step on PR's toes. It's kind of a similar idea. When you're
doing this process, reaching out to big outlets that are covering you, it's
frequently newspapers that don't cite links properly, so you have to do that
kind of outreach. It's a high-value kind of campaign when you hit those big
newspapers, but it's also a risk with PR if you step on them and they don't
like what you're doing. They hate it when you talk to their contacts directly.
So verify with your PR team that this is an okay thing before you start doing
this outreach.


Ask for links only where you should be linked. What I mean by that is
sometimes you'll get mentions in articles in passing. Maybe they'll be talking
about general soda trends, and they'll randomly rattle off five soda brands,
Pepsi, Coca-Cola, Sprite, something like that, all in a sentence. So you could go and
say, "Hey link to Pepsi," but there are four other companies there that they
would also have to link to.


You're just kind of a one off thing in the article. You don't totally make
sense to be linked to in those kinds of situations. So it's an example of a
non-harmonious kind of event where you shouldn't ask for a link because it
doesn't make sense necessarily. So in those kinds of situations, skip it. You
want places where there's positive brand sentiment and where you should have
been linked to in the article, but weren't.


So if you can really say in your mind it adds value to the article being
linked here, then that's when you should do outreach for this kind of link
reclamation. If not, you potentially could burn bridges, step on people's toes,
and put yourself at risk for future coverage that might have been more powerful
had you not actually ruined your relationship with that press person.


And finally, if you're doing this at scale and you have a lot of people
mentioning you, you're a huge brand, you want to use tools that make this more
efficient. So one of the problems is sometimes you'll have no idea if someone
has linked to you before, unless you have a process put in place.


So there are a lot of link management tools where you can dump all of your
links into it, and it will automatically have a popup in the corner saying that
you have a link from this domain. So if you have a lot of people doing outreach
and doing link reclamation, you can see, "Hey, I've already gotten a link from
this domain or multiple links from this domain. I don't need to do this outreach
again."


Otherwise it's kind of time intensive, trying to remember who's linked to
you, who hasn't linked to you, whether or not you should do that outreach, and all
of those things. So doing that on top of all of these things I think is really
powerful.


That's pretty much it, but I hope you guys see this as a valuable thing. It's
really powerful, and the bigger your brand, the more powerful it's going to
be. But I think any business on the web today that's building a brand, because
that's what it takes to rank in today's Google search results, is going to get
occasional mentions in any of these instances, mentions you can potentially
capitalize on where people don't link to you.


So I hope this was valuable and have a good one.




Video transcription by Speechpad.com



You may view the latest post at
http://feedproxy.google.com/~r/seomoz/~3/RWkx7nirqS4/link-reclamation-whiteboard-friday


Thursday 24 October 2013


Build Great Backlinks has posted a new item, 'Take the 2013 Moz Industry Survey:
Your Peers Need You!'

Posted by Cyrus-Shepard
We're very excited to announce the 2013 Moz Industry Survey is ready to take.
This is the fourth edition of the survey, which started in 2008 as the SEO
Industry Survey and only ran every two years. So much has changed since the last
survey that we thought it was important to run it anew in order to gain fresh
insights. Some of what we hope to learn and share:

Who works in inbound marketing and SEO today?
What tactics and tools are most popular?
Where are marketers spending their dollars?
What does the future of the industry hold?

This year's survey was redesigned to be easier and only take 5-10 minutes.
When the results are in we'll share the data freely with you, our partners, and
the rest of the world.




Prizes

It wouldn't be the Industry Survey without a few excellent prizes thrown in as
an added incentive.


This year we've upped the game with prizes we feel are both exciting and
perfect for the busy inbound marketer. To see the full sweepstakes terms and
rules, go to our sweepstakes rules page. The winners will be announced by June
4th. Follow us on Twitter to stay up to date.

Grand Prize: Attend MozCon 2014 in Seattle
+ Flight
+ Hotel
+ Lunch with an industry expert



Come see us Mozzers in Seattle! The Grand Prize includes one ticket to MozCon
2014 plus airfare and accommodations. We'll also arrange a one-on-one lunch for
you with an industry expert.

2 First Prizes: iPad 2

We're giving away two separate iPad 2s.



10 Second Prizes: $100 Amazon.com gift cards

Yep, 10 lucky people will win $100 Amazon.com gift cards. Why not buy yourself
a nice book?






Why the survey is important

By comparing answers and predictions from one year to the next, we can spot
trends and gain insight not easily reported through any other source. This is
our best chance to understand exactly where the future of our industry is
headed. Some of the things we hope to learn:

Demographics: Who is practicing inbound marketing and SEO today? Where do we
work and live?
Agencies vs. in-house vs. other: How are agencies growing? What's the average
size? Who is doing inbound marketing on their own?
Tactics and strategies: What's working for people today? How have strategies
and tactics evolved?
Tools and technology: What are marketers using to discover opportunities,
promote themselves, and measure the results?
Budget and spending: What tools and platforms are marketers investing in?

Every year the Industry Survey delivers new insights and surprises. For
example, the chart below (from the 2012 survey) lists average reported salary by
role. Will it change in 2013?






2012 SEO Industry Survey



Thanks to our partners

Huge thanks to our partners who are helping to spread the word and encouraging
their audience to participate in the survey. We'd especially like to give
special recognition to Search Engine Land, Buffer, aimClear, SEOverflow,
CopyBlogger, Econsultancy, Content Marketing Institute, TopRank Marketing,
MarketingProfs, HootSuite, Entrepreneur.com, Distilled, and HubSpot.





Sharing is caring

The number of people who take the survey is very important! The more people
who take the survey, the better and more accurate the data will be, and the more
insight we can share with the industry.


So please share with your co-workers. Share on social media. Share with your
email lists. You can use the buttons below this post to get you started, but
remember to take the survey first!






You may view the latest post at
http://feedproxy.google.com/~r/seomoz/~3/gIGyPxMeE2s/take-2013-industry-survey


[Build Great Backlinks] TITLE

Build Great Backlinks has posted a new item, 'Hummingbird Unleashed'

Posted by gfiorelli1
Sometimes I think we SEOs could be wonderful characters for a Woody Allen movie: we are stressed, nervous, and paranoid, and we have a tendency toward sudden changes of mood... okay, maybe I am exaggerating a little bit, but that's how we tend to (over)react whenever Google announces something. Cases like this webmaster, who desperately thinks he was penalized by Hummingbird, are not uncommon.




One thing that doesn't help is the lack of clarity coming from Google, which not only never mentions Hummingbird in any official document (for example, in the post for its 15th anniversary), but has also shied away from giving details of this epochal update in Amit Singhal's "off-the-record" remarks. In fact, in some ways those statements partly contributed to the confusion.
When Google announces an update, especially one like Hummingbird, the best thing to do is to avoid trying to understand immediately, on intuition alone, what it really is. It is better to wait until the dust settles, retrieve the original documents, examine those related to them (and any variants), take the time to see the update in action, calmly investigate, and only then try to find the most plausible answers.
This method is not scientific (and therefore the answers can't be labeled as "surely correct"); it is philological, and when it comes to Google and its updates, I consider it a great method to use.
The original documents are the story for the press of the event during which
Google announced Hummingbird, and the FAQ that Danny Sullivan published
immediately after the event, which makes direct reference to what Amit Singhal
said.
Related documents are the patents that probably underlie Hummingbird, and the
observations that experts like Bill Slawski, Ammon Johns, Rand Fishkin, Aaron
Bradley and others have derived.
This post is the result of my study of those documents and field observations.

Why did Amit Singhal mix apples with oranges?

When announcing Hummingbird, Amit Singhal said that the Google algorithm had not been updated so deeply since Caffeine in 2010.


The problem is that Caffeine wasn't an algorithmic change; it was an
infrastructural change.


Caffeine's purpose, in fact, was to optimize the indexation of the billions of
Internet documents Google crawls, presenting a richer, bigger, and fresher pool
of results to the users.


Instead, Hummingbird's objective is not a newer optimization of the indexation
process, but to better understand the users' intent when searching, thereby
offering the most relevant results to them.


Nevertheless, we can affirm that Hummingbird is also an infrastructural
update, as it governs the more than 200 elements that make up Google's
algorithm.




The (maybe unconscious) association Amit Singhal created between Caffeine and
Hummingbird should tell us:

That Hummingbird would not be here if Caffeine wasn't deployed in 2010, and
hence it should be considered an evolution of Google Search, and not a
revolution.
Moreover, that Hummingbird should be considered Google's most ambitious attempt
to solve all the algorithmic issues that Caffeine caused.

Let me explain this last point.


Caffeine, by doing away with the so-called "sandbox," caused the SERPs to be flooded with poor-quality results.


Google reacted by creating "patches" like Panda, Penguin, and the exact-match
domain (EMD) updates, among others.


But these updates, so effective for what we might call head and mid-tail queries, were not so effective for a type of query that more and more people have begun to use, mainly because of users' fast adoption of mobile search: conversational long-tail queries, or what Amit Singhal has called "verbose queries."


The evolution of natural language recognition by Google, the improved ability
to disambiguate entities and concepts through technology inherited from Metaweb
and improved with Knowledge Graph, and the huge improvements made in the SERPs'
personalized customization have given Google the theoretical and practical tools
not only for solving the problem of long-tail queries, but also for giving a
fresh start to the evolution of Google Search.


That is the backstory behind what Amit Singhal said about Hummingbird, paraphrased here by Danny Sullivan:


[Hummingbird] Gave us an opportunity [...] to take synonyms and knowledge graph and other things Google has been doing to understand meaning to rethink how we can use the power of all these things to combine meaning and predict how to match your query to the document in terms of what the query is really wanting, and are the connections available in the documents, and not just random coincidence that could be the case in early search engines.

How does Hummingbird work?

"To take synonyms and knowledge graph and other things..."


Google has been working with synonyms for a long time. If we look at the timeline Google itself shared in its 15th-anniversary post, it has used them since 2002, and we can also tell that disambiguation (in the sense of orthographic analysis of queries) has been applied since 2001.




Last year Vanessa Fox wrote "Is Google's Synonym Matching Increasing?..." on
Search Engine Land.


Reading that post and seeing the examples presented, it is clear that Google already used synonyms, in connection with the user intent underlying the query, to broaden and rewrite the query and offer the best results to users.


That same post, though, shows us why using only a thesaurus of synonyms, or relying on knowledge of the highest-ranked queries, was not enough to ensure relevant SERPs (see how Vanessa points out that Google doesn't consider "dogs" pets in the query "pet adoption," but does consider "cats").


Amit Singhal, in this old patent, was also conscious that relying only on synonyms was not a perfect solution, because two words may be synonyms in one context and not in another (e.g., "coche" and "automóvil" both mean "car" in Spanish everywhere, but "carro" means "car" only in Latin American Spanish; in Spain it means "wagon").


Therefore, in order to deliver the best results possible using semantic
search, what Google needed to understand better, easier, and faster was context.
Hummingbird is how Google solved that need.





Synonyms remain essential; Amit Singhal confirmed that in the post-event talk
with Danny Sullivan. How they are used now has been described by Bill Slawski in
this post, where he dissects the Synonym identification based on co-occurring
terms patent.
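The general idea behind synonym identification via co-occurring terms can be sketched with a toy model: terms that keep appearing in the same contexts get a high similarity score. This is only an illustration of the concept, not Google's implementation; the corpus and terms below are made up.

```python
from collections import Counter
from math import sqrt

# Toy corpus: each "document" is a list of terms (hypothetical data).
docs = [
    ["cheap", "car", "insurance", "quotes"],
    ["cheap", "auto", "insurance", "quotes"],
    ["used", "car", "dealer", "prices"],
    ["used", "auto", "dealer", "prices"],
    ["pet", "adoption", "center"],
]

def cooccurrence_vector(term):
    """Count the terms that appear alongside `term` in the same document."""
    vec = Counter()
    for doc in docs:
        if term in doc:
            vec.update(t for t in doc if t != term)
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse co-occurrence vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "car" and "auto" share the same contexts, so they score as likely synonyms;
# "car" and "pet" share none.
print(cosine(cooccurrence_vector("car"), cooccurrence_vector("auto")))
print(cosine(cooccurrence_vector("car"), cooccurrence_vector("pet")))
```

On this tiny corpus "car" and "auto" have identical co-occurrence profiles (similarity 1.0), while "car" and "pet" score 0.0; the real system obviously works at web scale with far richer signals.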


That patent, then, is also based on the concept of "search entities," which I described in my last post here on Moz when talking about personalized search.


Speaking literally, words are not "things" themselves but the verbal
representation of things, and search entities are how Google objectifies words
into concepts. An object may have a relationship with others that may change
depending on the context in which they are used together. In this sense, words
are treated like people, cities, books, and all the other named entities usually
related to the Knowledge Graph.


The mechanisms Google uses to identify search entities are especially important in disambiguating the different potential meanings of a word, thereby refining information retrieval according to a "probability score."


This technique is not so different from what the Knowledge Graph does when
disambiguating, for instance, Saint Peter the Apostle from Saint Peter the
Basilica or Saint Peter the city in Minnesota.
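The "probability score" idea can be illustrated with a deliberately tiny sketch: each candidate sense of an ambiguous name carries context terms, and the sense that best explains the query's context wins. The entities and context sets here are hypothetical, and real disambiguation is vastly more sophisticated.

```python
# Hypothetical candidate entities for the ambiguous string "Saint Peter",
# each with terms that typically co-occur with that sense.
candidates = {
    "Saint Peter (apostle)":  {"apostle", "gospel", "rome", "church"},
    "St. Peter's Basilica":   {"basilica", "vatican", "architecture", "rome"},
    "Saint Peter, Minnesota": {"city", "minnesota", "county", "population"},
}

def disambiguate(query_terms):
    """Score each sense by the fraction of the query context it explains."""
    scores = {
        name: len(ctx & query_terms) / len(query_terms)
        for name, ctx in candidates.items()
    }
    return max(scores, key=scores.get), scores

best, scores = disambiguate({"saint", "peter", "basilica", "vatican"})
print(best)  # the Basilica sense wins on this context
```

A query mentioning "basilica" and "vatican" pushes the score toward the building; swap in "minnesota" and the city sense would win instead.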


Finally, there is a third concept playing an explicit role in what could be the "Hummingbird patent": co-occurrences.


Integrating these three elements, Google is now (in theory) able:

To better understand the intent of a query;
To broaden the pool of web documents that may answer that query;
To simplify how it delivers information, because if query A, query B, and query
C substantively mean the same thing, Google doesn't need to propose three
different SERPs, but just one;
To offer a better search experience, because by expanding the query and better understanding the relationships between search entities (also based on direct/indirect personalization elements), Google can now offer results that have a higher probability of satisfying the user's needs.
As a consequence, Google may also present better SERPs in terms of ads, because before Hummingbird, verbose queries showed no ads in 99% of cases.
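The point about collapsing queries that substantively mean the same thing into a single SERP can be sketched as a normalization step: strip stopwords, map synonyms to a canonical term, and compare what remains. This is a toy built on invented stopword and synonym tables, not Google's pipeline.

```python
# A sketch of query canonicalization: if query A, B, and C reduce to the
# same normalized form, one result set can serve all three.
# The stopword and synonym tables below are hypothetical.
STOPWORDS = {"the", "a", "an", "is", "what", "where", "to", "best", "can", "i"}
SYNONYMS = {"buy": "purchase", "place": "store", "shop": "store"}

def canonicalize(query):
    """Lowercase, drop stopwords, map synonyms, and sort the remaining terms."""
    terms = [SYNONYMS.get(t, t) for t in query.lower().split() if t not in STOPWORDS]
    return tuple(sorted(terms))

queries = [
    "Where is the best place to buy an iPhone",
    "best shop to purchase iphone",
    "iphone purchase store",
]
forms = {canonicalize(q) for q in queries}
print(forms)  # all three collapse to a single canonical form
```

All three phrasings reduce to the same tuple, so a single SERP could serve them; the actual query rewriting described in this post layers entity understanding and personalization on top of anything this simple.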





Maybe Hummingbird could even have solved Fred Astaire and Ginger Rogers' "speaking" issues...

90% of the queries affected, seriously?

Many SEOs have questioned the claim that Hummingbird has affected 90% of all queries, for the simple reason that they didn't notice any change in traffic and rankings.


Apart from the fact that the SERPs were in constant turmoil between the end of August and the first half of September, when Hummingbird first saw the light (though that could just be a coincidence, albeit quite an opportune one), the typical query Hummingbird targets is the conversational one (e.g., "What is the best pizzeria to eat at close to Piazza del Popolo and Via del Corso?"), a kind of query that we SEOs usually don't track (well, apart from Dr. Pete, maybe).


Moreover, Hummingbird is about queries, not keywords (much less long-tail ones), as Ammon Johns explained so well in his post "Hummingbird - The opposite of long-tail search." For that reason, tracking long-tail rankings as a metric of Hummingbird's impact is simply wrong.


Finally, Hummingbird has not meant the extinction of all the classic ranking
factors, but is instead a new framework set upon them. If a site was both
authoritative and relevant for a query, it still will be ranking as well as it
was before Hummingbird.


So, which sites got hit? Probably those that relied on masses of very long-tail keyword-optimized pages but had little or no authority. Therefore, as Rand said in his latest Whiteboard Friday, it is now far more worthwhile to create better linkable/shareable content that also semantically relates to long-tail keywords than to create thousands of long-tail-based pages with little or no quality or utility.

If Hummingbird is a shift to semantic SEO, does that mean that using Schema.org
will make my site rank better?

One of the myths that spread very fast when Hummingbird was announced was that it heavily uses structured data as a main factor.


Although it is true that for some months now Google has stressed the importance of structured data (for example, dedicating a section to it in Google Webmaster Tools), considering Schema.org the magic solution is not correct. It is an example of how we SEOs sometimes confuse the means with the end.




What we need to do is offer Google easily understandable context for the
topics around which we have created a page, and structured data are helpful in
this respect. By themselves, however, they are not enough. As mentioned before,
if a page is not considered authoritative (thanks to external links and
mentions), it most likely will not have enough strength for ranking well,
especially now that long-tail queries are simplified by Hummingbird.

Is Hummingbird related to the increased presence of the Knowledge Graph and
Answers Cards?

Many people came up with the idea that Hummingbird is the translation of the
Knowledge Graph to the classic Google Search, and that it has a direct
connection with the proliferation of the Answer Cards. This theory led to some
very angry posts ranting against the "scraper" nature of Google.


This is most likely due to the fact that Hummingbird was announced alongside
new features of Knowledge Graph, but there is no evident relationship between
Hummingbird and Knowledge Graph.


What many have interpreted as causation (Hummingbird producing more Knowledge Graph results and Answer Cards, and hence being the same thing) is most probably simple correlation.


Hummingbird substantially simplified verbose queries into less verbose ones,
the latter of which are sometimes complemented with the constantly expanding
Knowledge Graph. For that reason, we see a greater number of SERPs presenting
Knowledge Graph elements and Answer Cards.


That said, the philosophy behind Hummingbird and the Knowledge Graph is the same: moving from strings to things.

Is Hummingbird strongly based on the Knowledge Base?

The Knowledge Base is potent and pervasive in how Google works, but reducing
Hummingbird to just the Knowledge Base would be simplistic.




As we saw, Hummingbird relies on several elements, the Knowledge Base being
one of them, especially in all queries with personalization (which should be
considered a pervasive layer that affects the algorithm).


If Hummingbird relied heavily on the Knowledge Base without complementing it with other factors, we would fall into the very issues Amit Singhal was struggling with in the earlier patent about synonyms.

Does Hummingbird mean the end of the link graph?

No. PageRank and link-related elements of the algorithm are still alive and
kicking. I would also dare to say that links are even more important now.


In fact, without the authority a good link profile grants to a site, a web
page will have even more difficulty ranking now (see what I wrote just above
about the fate of low-authority pages).


What is even more important now is the context in which the link is present.
We already learned this with Penguin, but Hummingbird reaffirms how inbound
links from topically irrelevant contexts are bad links.


That said, Google still has to improve on the link front, as Danny Sullivan
said well in this tweet:




Links are the fossil fuel of search relevancy signals. Polluted. Not getting better. And yet, that's what Google Hummingbird drinks most.
- Danny Sullivan (@dannysullivan) October 18, 2013


At the same time, though (again because of context and entity recognition),
brand co-occurrences and co-citations assume an even more important role with
Hummingbird.

Is Hummingbird related to 100% (not provided)?

The fact that Hummingbird and 100% (not provided) were rolled out at almost
the same time seems to be more than just a coincidence.


If Hummingbird is more about search entities, better information retrieval, and query expansion (an update in which keywords by themselves have lost part of the omnipresent value they once had), then relying on keyword data alone is no longer enough.


We should stop focusing only on keyword optimization and start thinking about
topical optimization.


This obliges us to think about great content, and not just about "content."
Things like "SEO copywriting" will end up being the same as "amazing
copywriting."


To do that, as SEOs, we should start understanding how search entities work, and not simply become human thesauruses of synonyms.


If Hummingbird is a giant step toward semantic SEO, then as SEOs our job "is not about optimizing for strings, or for things, but for the connections between things," as Aaron Bradley brilliantly puts it in this post and in his deck for SMX East.






Semantic SEO - The Shift From Strings To Things by Aaron Bradley #SMX from
Search Marketing Expo - SMX

What must we do to be Hummingbird-friendly?

Let me ask you a few questions, and try to answer them sincerely:

When creating/optimizing a site, are you doing it with a clear audience in your
mind?
When performing on-page optimization for your site, are you following at least
these SEO best practices?
Using a clear and not overly complex information architecture;
Avoiding canonicalization issues;
Avoiding thin-content issues;
Creating a semantic content model;
Topically optimizing the content of the site on a page-by-page basis, using
natural and semantically rich language and with a landing page-centric strategy
in mind;
Creating useful content using several formats, that you yourself would like to
share with your friends and link to;
Implementing Schema.org, Open Graph and semantic mark-ups.

Are your link-building objectives:
Better brand visibility?
Gaining referral traffic?
Enhancing the sense of thought leadership of your brand?
Links from topically related sites and/or topically related sections of a more generalist site (e.g., a news site)?

As an SEO, is social media offering these advantages?
Wider brand visibility;
Social echo;
Increased mentions/links in the form of derivatives, co-occurrences, and
co-citation in others' web sites;
Organic traffic and brand ambassadors' growth.
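As a concrete illustration of the Schema.org item in the checklist above, here is what a minimal Article markup block might look like, built and printed as JSON-LD from Python. All the values are hypothetical placeholders you would replace with your own page's data.

```python
import json

# A minimal, hypothetical Schema.org Article object (JSON-LD).
article = {
    "@context": "http://schema.org",
    "@type": "Article",
    "headline": "Hummingbird Unleashed",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2013-10-30",
    "publisher": {"@type": "Organization", "name": "Example Publisher"},
}

# The resulting JSON would be embedded in the page inside a
# <script type="application/ld+json"> ... </script> element.
print(json.dumps(article, indent=2))
```

Markup like this gives Google explicit context about what the page is, but, as argued above, it complements (rather than replaces) authority signals such as links and mentions.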


If you answered yes to all these questions, you don't have to do anything but keep up the good work, refine it, and stay creative and engaging. You were likely already seeing your site rank well and gain traffic thanks to your more holistic vision of SEO.

If you answered no to a few of them, then you just have to correct the things you're doing wrong and follow the so-called SEO best practices (the 2013 Moz Ranking Factors are a good list of those).


If you sincerely answered no to many of them, then you had problems even before Hummingbird was unleashed, and things won't get better if you don't radically change your mindset.


Hummingbird is not asking us to rethink SEO or to reinvent the wheel. It is
simply asking us to not do crappy SEO... but that is something we should know
already, shouldn't we?



You may view the latest post at
http://feedproxy.google.com/~r/seomoz/~3/whRRlu4630k/hummingbird-unleashed

