Category: SEO News

Google News, SEO News

Google’s Search Quality Raters Guidelines Infographic

To the surprise and delight of many SEOs, Google recently released the full edition of its Search Quality Raters Guidelines – 160 pages of detailed instructions used to guide the thousands of humans who are paid to manually evaluate the quality of the search results returned by Google.

Although it doesn’t reveal the “secret sauce” to ranking number one on the Search Engine Results Pages (SERPs), the guidelines do provide valuable insight into what Google values, or finds undesirable, when evaluating a website or web pages.

It should be made clear that the human Search Quality Raters do not directly affect your search rankings; they are not actively voting websites up or down the SERPs – wouldn’t that be interesting? Instead, they are providing feedback to Google engineers on the accuracy and validity of the search algorithm and, in that way, are indirectly impacting future updates to the search algorithm.

Read More
SEO News

How Smartphones Impact the World [Infographic]

Given the points made last week, I thought I’d share some smartphone facts and attach the infographic How Smartphones Impact the World below.

A few key points:

  1. 25% of U.S. households have “cut the cord” and no longer use wireline (landline) telephone service. My family only uses ours for alarm monitoring these days.
  2. 80% of the world’s population now has a phone, and an increasing share of those phones are smartphones. The percentage keeps climbing every year.
  3. Smartphones will be the primary computing device for a large share of the developing world for some time to come.

The Bottom Line: Having a website that is mobile friendly is a bigger deal than ever. Increasingly, the smartphone is the primary form of communication for many people in the developed world, and in the developing world it IS the ONLY form of communication for most people.

[Infographic: How Smartphones Impact the World]

Read More
SEO News

The Blueprint for a Perfectly Testable Landing Page

Landing pages are composed of a group of definable elements. The building blocks presented below can be used as a guide when defining and creating a perfect landing page of your own!

Creating a landing page but don’t know where to start or what to include?

The infographic below will help you understand the six elements of a landing page: the main headline, the hero shot, data collection, the call to action (CTA), benefits and the “safety net” CTA.

Test different copy, call-to-action layouts and colors to arrive at the best landing page for your product or service.

Click on the infographic below to view a larger image:

[Infographic: The Blueprint for a Perfectly Testable Landing Page]

Read More
Google News, Google Products, Google Webmaster Tools, SEO News, SEO Tools

Google Officially Launches The New Search Analytics Report In Webmaster Tools

After months of testing, Google has released the new Search Analytics report within Google Webmaster Tools – let’s take a deep dive.

If you manage a website, you need a deep understanding of how users find your site and how your content appears in Google’s search results. Until now, this data was shown in the Search Queries report, probably the most used feature in Webmaster Tools. Over the years, we’ve been listening to your feedback and feature requests. How many of you wished you could compare traffic on desktop and mobile? How many of you needed to compare metrics in different countries? Or in two different time frames?

We’ve heard you! Today, we’re very happy to announce Search Analytics, the new report in Google Webmaster Tools that will allow you to make the most out of your traffic analysis.
The new Search Analytics report enables you to break down your site’s search data and filter it in many different ways in order to analyze it more precisely. For instance, you can now compare your mobile traffic before and after the April 21st Mobile update, to see how it affected your traffic.

[Screenshot: the new Search Analytics report in Google Webmaster Tools]

You can now access the tool within Google Webmaster Tools. There are some differences between Search Analytics and Search Queries. Data in the Search Analytics report is much more accurate than data in the older Search Queries report, and it is calculated differently. To learn more, read our Search Analytics Help Center article’s section about data. Because we understand that some of you will still need to use the old report, we’ve decided to leave it available in Google Webmaster Tools for three additional months.

Read More
SEO News

SEO Training 101 [Part 2]

In this article I want to cover a couple of best practices that anyone developing a website should take into account when optimizing their web pages for search engines. Optimizing a web page means ensuring there is a very high probability that it ranks on the first page when its targeted keywords are entered into the search bar.

So, let’s now look at some key elements of web page design that should be taken into account:

Many of us make the mistake of assuming that humans and search engines interpret web pages the same way. In reality, that is not the case. There are things that humans can read and interpret that search engines cannot. Search engines are currently designed to read HTML text formats, so if your website has loads of images with text embedded in those pictures, you can rest assured that the search engine spiders will not be able to read them. So, with that in mind, you should do the following to your visually stunning web pages to ensure that they are also readable by the search engine bots:

(a.) When you use images in your web pages, assign “alt text” to each image containing a description of what it shows, so that the crawlers can read it and know what the image is (see the sketch after this list).

(b.) Java plug-ins should also have text on the page that describes what they are.

(c.) Flash plug-ins should also have text assigned to them as descriptors stating what they are.

(d.) If you plan on using embedded audio or video media in your web pages, also provide a transcript – a written record of what is being said in the audio and what is happening in the video.
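
As a minimal sketch of points (a) through (d) – the file names and wording below are placeholders, not taken from any particular site – the markup might look something like this:

<!-- (a) Descriptive alt text so crawlers know what the image shows -->
<img src="red-baby-stroller.jpg" alt="Red three-wheel baby stroller with sun canopy">

<!-- (b) and (c) Plain text on the page describing what a plug-in does -->
<object data="product-tour.swf" type="application/x-shockwave-flash">
  <p>Interactive product tour of our red three-wheel baby stroller.</p>
</object>

<!-- (d) A transcript accompanying embedded video -->
<video src="stroller-demo.mp4" controls></video>
<p>Transcript: In this demo we fold the stroller with one hand, stow it in the boot, and…</p>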

[Image: SEO Training 101 – basics for beginners]

There are also lots of free tools on the Internet that you can use to confirm what search engines can read. It is good practice to run these tools after you have built your web pages and optimized them for SEO. Some examples are Google’s cache and WebConfs.com, to name a few. These tools are especially useful for web pages with lots of images and links, because they highlight whether search engines can tell what the images, audio and video are portraying, and whether the links you have included in your web pages are readable.

Another question that you have to ask yourself, especially if you own a website with many pages of content, is whether the search engine spiders and crawlers can find all of your pages so that they can be indexed. A lot of us make the mistake of creating compelling content but failing to link to those pages from our home page or from another page within our website. When that happens we end up with clusters of orphan pages that have great content but cannot be found and indexed by search engines. The end result is that your target audience never gets to see your content! To fix this, build a crawlable link structure by ensuring there are links to every page within your website. One good practice for a blog, for example, is to link from your home page to your first blog post and then link between every new blog post you create, so that when a crawler finds your home page it can simply traverse all of your blog posts and index them.
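
As a rough sketch of such a crawlable chain (the page names here are hypothetical), the home page links to the first post and each post links on to its neighbours, so a crawler that lands on the home page can reach every post:

<!-- On the home page: a plain HTML link the crawler can follow -->
<a href="/blog/first-post.html">Read our first post</a>

<!-- On first-post.html: a link on to the next post -->
<a href="/blog/second-post.html">Next post</a>

<!-- On second-post.html: links back and forward keep the whole chain reachable -->
<a href="/blog/first-post.html">Previous post</a>
<a href="/blog/third-post.html">Next post</a>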

Read More
Google News, Google Products, Search Engine Optimization, SEO News, Social Media News

It’s Official: Google Says More Searches Now On Mobile Than On Desktop

Company officially confirms what many have been anticipating for years.

It was probably inevitable, but now it’s official: Google searches via mobile devices are now more common than searches on desktop or laptop computers. There’s been talk that mobile search queries would overtake desktop queries sometime in 2015, and now it’s happened, according to Google.

The milestone was marked on Google’s blog on Tuesday, although it was buried in a post introducing a handful of new Google products.

“Billions of times per day, consumers turn to Google for I-want-to-know, I-want-to-go, I-want-to-do, and I-want-to-buy moments,” wrote Jerry Dischler, vice president of Product Management for AdWords, Google’s online ad service. Google processes more than 100 billion search requests worldwide each month, including those made on PCs, according to the company.

[Image: Google Search on a mobile device]

Google now gets more search queries in the U.S. from people using mobile devices such as smartphones than it does from people browsing the Web on PCs.

The company announced the change at a digital advertising conference on Tuesday, according to media reports. Google executive Jerry Dischler said that the shift to majority mobile searches has occurred in 10 key markets, including the U.S. and Japan.

He wouldn’t identify the other countries and did not say when the shift occurred, according to the Wall Street Journal.

The shift to mobile searches marks an important milestone for Google and could prove critical to the Internet giant’s future business.

The ads that Google displays alongside its search results are considered some of the most effective and lucrative types of ads in the online marketing world. But Google’s average ad prices have been in decline for several years.

Many analysts have attributed Google’s ad price decline to the fact that consumers increasingly access its service from mobile phones, and that mobile ads simply don’t command the same rates as the traditional search ads that Google serves on PCs.

Read More
Search Engine Optimization SEO News

SEO Training 101 [Part 1]

The Internet has been around – available to the general public, at least – for the last 20 years or so, and it has grown exponentially (and still is), with hundreds of thousands of web servers humming away in closets, garages, massive server farms and Aunt Betty’s basement, speckled all over the planet. In a single day, billions of individuals use the Internet to find information and consume data in multimedia formats (video, voice, gaming, social media).

The Internet has woven itself into our lives to the point that there is a form of symbiosis between man and machine; if we were to wake up tomorrow to learn that the Internet was no more, we would feel a huge void – perhaps like a chain smoker who has decided to go cold turkey and quit smoking overnight!

[Image: SEO training basics]

The Internet has changed the landscape of our economy. Countless transactions occur online where people buy products and services with the press of a button from their smartphones, tablets and computers. These “online stores” are always open and can be accessed from anywhere in the world – as long as you have an Internet connection. But how do people find what they are looking for? Mainly through search engines – and this is where SEO, or Search Engine Optimization, comes into play. SEO is simply the process of putting a web page together with the ultimate goal of appearing on the first page of a search engine (like Google, for example), and as high as possible on that first page, when a target keyword is entered into the search bar. A simple example: if you had created a website on baby toys for infants, the target keyword that you would want your website to rank on the first page of Google for could be “toys for infants”.

Each major search engine provider has defined a mathematical algorithm that is used to determine how web pages are ranked. No one outside the company knows exactly what that algorithm is (for good reason), and these algorithms are updated frequently with the main purpose of preventing websites with poor content from being given a high rank. You see, there are lots of folks out there who do not want to spend the time creating quality content. Instead, they look for all kinds of shortcuts to get ranked highly so that they can make a quick buck selling products to people. Thanks to the work of the search engine providers, this type of activity has been largely controlled, and the fight continues to ensure that people get the high-quality content they are looking for.

Now, let’s look at how search engines know what is out there. The Internet is comprised of billions upon billions of web pages, and search engines leverage things called “spiders” and “crawlers” that spend all day and night traversing those pages to find newly created ones. When a new web page is found, these spiders read the content and then index it within massive databases located all over the planet. When the content is indexed, the search engine assigns a set of keywords to it that describe what type of information is available on that page. That is why it is important to use keywords that closely relate to the content you are creating, so that it is properly indexed in the search engine database.
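
As a hedged illustration of that last point – the page below is invented for the “toys for infants” example used earlier – the target keyword appears naturally in the title, heading and body text that the crawlers read:

<head>
  <title>Toys for Infants – Safe Play Ideas for Babies</title>
  <meta name="description" content="A guide to choosing safe, stimulating toys for infants under one year old.">
</head>
<body>
  <h1>Choosing Toys for Infants</h1>
  <p>Soft blocks, rattles and activity gyms are popular toys for infants because…</p>
</body>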

I love analogies so I will use one here. Spiders and crawlers can be analogous to subway trains that stop at every station (the station would be a webpage) within a complex underground subway network. Whenever a train makes a stop at a station, information about that station is recorded and indexed in a search engine database.

Now, the reason that there are search engine databases located all over the world in super fast and high end computers is because of the fact that we are living in an instant-gratification world. In other words, we are used to the fact that we can enter a search term in the search bar of Google and get results in less than a second. If the results take more than 2 to 3 seconds to come up on the screen we get frustrated! I must say that we have come a long way from taking the bus to the public library and then taking 20 minutes to find the perfect book on a subject that we are researching!

[Image: SEO roadmap wheel]

Before I end this article for today, I want to cover how search engines rank web pages for a particular keyword, seeing that there are often hundreds or even thousands of web pages competing for one of the first ten slots of a search results page for that keyword. So how is the ranking done? There are two factors at play here: relevance and popularity. The main objective of search engines is to give the searcher the most relevant content for the keyword they type in, so that they do not have to conduct another search. For example, if I type in “Green 2015 Ford Mustangs” I want to see the most relevant websites that give me information on green 2015 Ford Mustangs (e.g. green Ford Mustang specifications, videos, cars for sale, etc.). For popularity, the search engine measures how popular the website hosting the content is, as well as the web page itself – are viewers staying on the page long enough to read the content? Are people coming back to the site to consume more content? Are there many backlinks leading traffic (real people) to that site? And so forth…

This concludes SEO Training Part 1, and I hope that you derived some value from it. At the end of the day, if you are looking at creating a website and you want to rank well with the likes of Google, just focus on being yourself and writing very good, original content (no copying from other sites), and ensure that the content is highly relevant to your target keywords – if your target keyword is “baby clothes”, then your content had better be about baby clothes and not baby toys, for example. Lastly, in addition to creating great content, you have to syndicate it and let the world know that your fabulous website exists!

Read More
Inbound Marketing, Internet Marketing, Search Engine Optimization, SEO News

How to Get Googlebot to Index Your New Website & Blog Quickly

Whenever you create a new website or blog for your business, the first thing you probably want to happen is have people find it. And, of course, one of the ways you hope they will find it is through search. But typically, you have to wait around for the Googlebot to crawl your website and add it (or your newest content) to the Google index.

So the question is: how do you ensure this happens as quickly as possible? Here are the basics of how website content is crawled and indexed, plus some great ways to get the Googlebot to your website or blog to index your content sooner rather than later.

What Are Googlebot, Crawling, and Indexing?

[Image: Googlebot crawling and indexing]

Before we get started on some good tips to attract the Googlebot to your site, let’s start with what the Googlebot is, plus the difference between indexing and crawling.

  • The Googlebot is simply the search bot software that Google sends out to collect information about documents on the web to add to Google’s searchable index.
  • Crawling is the process where the Googlebot goes around from website to website, finding new and updated information to report back to Google. The Googlebot finds what to crawl using links.
  • Indexing is the processing of the information gathered by the Googlebot from its crawling activities. Once documents are processed, they are added to Google’s searchable index if they are determined to be quality content. During indexing, the Googlebot processes the words on a page and where those words are located. Information such as title tags and ALT attributes is also analyzed during indexing.

So how does the Googlebot find new content on the web such as new websites, blogs, pages, etc.? It starts with web pages captured during previous crawl processes and adds in sitemap data provided by webmasters. As it browses web pages previously crawled, it will detect links upon those pages to add to the list of pages to be crawled. If you want more details, you can read about them in Webmaster Tools Help.

Hence, new content on the web is discovered through sitemaps and links. Now we’ll take a look at how to get sitemaps on your website and links to it that will help the Googlebot discover new websites, blogs, and content.

How to Get Your New Website or Blog Discovered

So how can you get your new website discovered by the Googlebot? Here are some great ways. The best part is that some of the following will help you get referral traffic to your new website too!

  • Create a Sitemap – A sitemap is an XML document on your website’s server that basically lists each page on your website (a bare-bones example appears after this list). It tells search engines when new pages have been added and how often to check back for changes on specific pages. For example, you might want a search engine to come back and check your homepage daily for new products, news items, and other new content. If your website is built on WordPress, you can install the Google XML Sitemaps plugin and have it automatically create and update your sitemap for you as well as submit it to search engines. You can also use tools such as the XML Sitemaps Generator.
  • Submit Sitemap to Google Webmaster Tools – The first place you should take your sitemap for a new website is Google Webmaster Tools. If you don’t already have one, simply create a free Google Account, then sign up for Webmaster Tools. Add your new site to Webmaster Tools, then go to Optimization > Sitemaps and add the link to your website’s sitemap to Webmaster Tools to notify Google about it and the pages you have already published. For extra credit, create an account with Bing and submit your sitemap to them via their Webmaster Tools.
  • Install Google Analytics – You’ll want to do this for tracking purposes regardless, but it certainly might give Google the heads up that a new website is on the horizon.
  • Submit Website URL to Search Engines – Some people suggest that you don’t do this simply because there are many other ways to get a search engine’s crawler to your website. But it only takes a moment, and it certainly doesn’t hurt things. So submit your website URL to Google by signing into your Google Account and going to the Submit URL option in Webmaster Tools. For extra credit, submit your site to Bing. You can use the anonymous tool to submit URLs below the Webmaster Tools Sign In – this will also submit it to Yahoo.
  • Create or Update Social Profiles – As mentioned previously, crawlers get to your site via links. One way to get some quick links is by creating social networking profiles for your new website or adding a link to your new website to pre-existing profiles. This includes Twitter profiles, Facebook pages, Google+ profiles or pages, LinkedIn profiles or company pages, Pinterest profiles, and YouTube channels.
  • Share Your New Website Link – Once you have added your new website link to a new or pre-existing social profile, share it in a status update on those networks. While these links are nofollow, they will still alert search engines that are tracking social signals. For Pinterest, pin an image from the website and for YouTube, create a video introducing your new website and include a link to it in the video’s description.
  • Bookmark It – Use quality social bookmarking sites like Delicious and StumbleUpon.
  • Create Offsite Content – Again, to help in the link building process, get some more links to your new website by creating offsite content such as submitting guest posts to blogs in your niche, articles to quality article directories, and press releases to services that offer SEO optimization and distribution. Please note this is about quality content from quality sites – you don’t want spammy content from spammy sites because that just tells Google that your website is spammy.
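
For reference, a bare-bones sitemap of the kind described in the first bullet above might look like the sketch below (the URLs and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://yoursite.com/</loc>
    <lastmod>2015-05-20</lastmod>
    <changefreq>daily</changefreq>
  </url>
  <url>
    <loc>http://yoursite.com/blog/</loc>
    <changefreq>daily</changefreq>
  </url>
</urlset>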

How to Get Your New Blog Discovered

So what if your new website is a blog? Then, in addition to all of the above options, you can also do the following to help get it found by Google.

  • Set Up Your RSS with Feedburner – Feedburner is Google’s own RSS management tool. Sign up for (or in to) your Google account and submit your feed with Feedburner by copying your blog’s URL or RSS feed URL into the “Burn a feed” field. In addition to your sitemap, this will also notify Google of your new blog and each time that your blog is updated with a new post.
  • Submit to Blog Directories – TopRank has a huge list of sites you can submit your RSS feed and blog to. This will help you build even more incoming links. If you aren’t ready to do them all, at least start with Technorati as it is one of the top blog directories. Once you have a good amount of content, also try Alltop.

The Results

Once your website or blog is indexed, you’ll start to see more traffic from Google search. Plus, getting your new content discovered will happen faster if you have set up sitemaps or have a RSS feed. The best way to ensure that your new content is discovered quickly is simply by sharing it on social media networks through status updates, especially on Google+.

Also remember that blog content is generally crawled and indexed much faster than regular pages on a static website, so consider having a blog that supports your website. For example, if you have a new product page, write a blog post about it and link to the product page in your blog post. This will help the product page get found much faster by the Googlebot!

What other techniques have you used to get a new website or blog indexed quickly? Please share in the comments!
Read More
Google News, Inbound Marketing, Internet Marketing, Search Engine Optimization, SEO News

Robots.txt: A Beginner’s Guide

Robots.txt is:

A simple file that contains components used to specify the pages on a website that must not be crawled (or in some cases must be crawled) by search engine bots. This file should be placed in the root directory of your site. The standard for this file was developed in 1994 and is known as the Robots Exclusion Standard or Robots Exclusion Protocol.

Some common misconceptions about robots.txt:

  • It stops content from being indexed and shown in search results.

If you list a certain page or file in a robots.txt file but the URL to that page is found in external resources, search engine bots may still index the URL and show the page in search results. Also, not all robots follow the instructions given in robots.txt files, so some bots may crawl and index pages mentioned in a robots.txt file anyway. If you want an extra indexing block, a robots Meta tag with a ‘noindex’ value in the content attribute will serve as such when used on these specific web pages, as shown below:

<meta name="robots" content="noindex">

Read more about this here.

  • It protects private content.

If you have private or confidential content on a site that you would like to block from the bots, please do not only depend on robots.txt. It is advisable to use password protection for such files, or not to publish them online at all.

  • It guarantees no duplicate content indexing.

As robots.txt does not guarantee that a page will not be indexed, it is unsafe to use it to block duplicate content on your site. If you do use robots.txt to block duplicate content make sure you also adopt other foolproof methods, such as a rel=canonical tag.
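
For example, a canonical tag placed in the <head> of each duplicate page points search engines at the preferred URL (the address here is only a placeholder):

<link rel="canonical" href="http://yoursite.com/preferred-page/">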

  • It guarantees the blocking of all robots.

Unlike Google bots, not all bots are legitimate and thus may not follow the robots.txt file instructions to block a particular file from being indexed. The only way to block these unwanted or malicious bots is by blocking their access to your web server through server configuration or with a network firewall, assuming the bot operates from a single IP address.

[Image: robots.txt beginner’s guide]

Uses for Robots.txt:

In some cases the use of robots.txt may seem ineffective, as pointed out in the above section. This file is there for a reason, however, and that is its importance for on-page SEO.

The following are some of the practical ways to use robots.txt:

  • To discourage crawlers from visiting private folders.
  • To keep the robots from crawling less noteworthy content on a website. This gives them more time to crawl the important content that is intended to be shown in search results.
  • To allow only specific bots access to crawl your site. This saves bandwidth.
  • Search bots request robots.txt files by default. If they do not find one, the server returns a 404 error, which you will see in the log files. To avoid this you should at least use a default robots.txt, i.e. a blank robots.txt file.
  • To provide bots with the location of your Sitemap.  To do this, enter a directive in your robots.txt that includes the location of your Sitemap:
      Sitemap: http://yoursite.com/sitemap-location.xml

You can add this anywhere in the robots.txt file because the directive is independent of the user-agent line.  All you have to do is specify the location of your Sitemap in the sitemap-location.xml part of the URL. If you have multiple Sitemaps you can also specify the location of your Sitemap index file.  Learn more about sitemaps in our blog on XML Sitemaps.

Examples of Robots.txt Files:

There are two major elements in a robots.txt file: User-agent and Disallow.

User-agent: The user-agent is most often given the wildcard value – an asterisk (*) – which signifies that the instructions apply to all bots. If you want certain bots to be blocked or allowed on certain pages, you can specify the bot’s name in the user-agent directive.

Disallow: When disallow has nothing specified it means that the bots can crawl all the pages on a site. To block a certain page you must use only one URL prefix per disallow. You cannot include multiple folders or URL prefixes under the disallow element in robots.txt.

The following are some common uses of robots.txt files.

To allow all bots to access the whole site (the default robots.txt) the following is used:

User-agent: *
Disallow:

To block the entire server from the bots, this robots.txt is used:

User-agent: *
Disallow: /

To allow a single robot and disallow other robots:

User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /

To block the site from a single robot:

User-agent: XYZbot
Disallow: /

To block some parts of the site:

User-agent: *
Disallow: /tmp/
Disallow: /junk/

Use this robots.txt to block all content of a specific file type. In this example we are excluding all files that are PowerPoint files. (NOTE: The dollar sign ($) matches the end of the URL):

User-agent: *
Disallow: /*.ppt$

To block bots from a specific file:

User-agent: *
Disallow: /directory/file.html

To crawl certain HTML documents in a directory that is blocked from bots you can use an Allow directive. Some major crawlers support the Allow directive in robots.txt. An example is shown below:

User-agent: *
Disallow: /folder/
Allow: /folder/myfile.html

To block URLs containing specific query strings that may result in duplicate content, the robots.txt below is used. In this case, any URL containing a question mark (?) is blocked:

User-agent: *
Disallow: /*?

For the page not to be indexed: Sometimes a page will get indexed even if you include it in the robots.txt file, for reasons such as being linked to externally. In order to completely block that page from being shown in search results, you can include a robots noindex Meta tag on those pages individually. You can also include a nofollow value to instruct the bots not to follow the outbound links, by inserting the following code:

<meta name="robots" content="noindex">

For the page not to be indexed and links not to be followed:

<meta name="robots" content="noindex,nofollow">

NOTE: If you add these pages to the robots.txt file and also add the above Meta tag to them, the pages will not be crawled, but they may still appear in URL-only listings in search results, because the bots were blocked from reading the Meta tags within the page.

Another important thing to note is that you must not include any URL that is blocked in your robots.txt file in your XML sitemap. This can happen, especially when you use separate tools to generate the robots.txt file and XML sitemap. In such cases, you might have to manually check to see if these blocked URLs are included in the sitemap. You can test this in your Google Webmaster Tools account if you have your site submitted and verified on the tool and have submitted your sitemap.

Go to Webmaster Tools > Optimization > Sitemaps and if the tool shows any crawl error on the sitemap(s) submitted, you can double check to see whether it is a page included in robots.txt.

Read More
SEO News

3 SEO Ideas You Should Forget Now

Seeing as SEO has been around since the late 90s, it stands to reason that quite a few things have changed since webmasters first began optimizing their websites for search. Unfortunately, there are still a lot of outmoded SEO tips and techniques floating around in cyberspace that refuse to die. If you still subscribe to these outdated SEO ideas, it’s time to get up to speed.

Ranking by backlinks alone: Search marketers have debunked this approach many times over. No matter what you were told, garnering a large number of backlinks alone won’t boost your place in the search engine rankings. Other factors go into your ranking, like social data, query relevance, and the frequency with which you update your content.

Do this instead: Backlinking is important, but you should also focus on an online marketing strategy that will get people to follow those backlinks and ultimately become your fans.

Copying your competitors: Websites succeed and fail due to a number of factors. Every company website is unique, as is every company behind the website. So what works for, say, Yahoo or Amazon may or may not work for your website.

Do this instead: Set your own benchmarks and do your own research. Take an individualized approach to finding out what will work for your website’s SEO.

Using article submissions: Submitting articles to sites like EzineArticles became a way for websites to get quick backlinks. However, this technique has only ever had limited success when it comes to SEO, especially once people started publishing low-quality articles for the sake of earning links.

Do this instead: If you’re going to publish content for SEO purposes, it’s best to use your content to build relationships. Get in touch with bloggers who would be happy to have you as a guest blogger. You’ll be adding valuable content to their website, and also building relationships with their audience.

SEO changes all the time, so it’s important for website owners to stay current on SEO trends. If you choose to purchase SEO services, make sure your provider has ways of staying up to speed on SEO news and changes. That way, you can guard against your website falling behind in the search rankings.

Read More