The article ‘Key Tips for Attracting Visitors to Your Site’ is helpful to anyone who owns a website and advertises products and services through the Internet, one of the most effective mediums of publicity. After reading this article, you will be able to attract more people to your website through internet search engines. And, as you know, more visitors mean greater popularity for the website, more product sales, and more revenue.
Canadian retailers saw a jump in Black Friday and Cyber Monday spending this year, with spending on foreign credit cards also increasing significantly, according to Moneris.
Moneris has been tracking Black Friday and Cyber Monday spending in Canada since 2011, reporting consecutive year-over-year growth. Sure, we have Boxing Day (December 26) here in the Great White North, but a growing number of Canadians have been seizing the day (or rather, the deals) on Black Friday and Cyber Monday. 2015 was no different: Canadians were out in droves shopping for clothes, household items, and sporting goods. So how well did the days perform in 2015? A fair share of consumers shopped on both days, and Moneris reports that Black Friday spending grew 9.6% and Cyber Monday spending grew 14.1% this year in Canada.
Running a business in any industry has its challenges when it comes to marketing. If you own an accountancy business, you may be wondering about the best ways to advertise it. Many of the things you should do to market your services are no different from what other business owners should do. But if you’re working solo, all of the marketing of your business is down to you. Your brand and you as an individual are closely linked, so it’s important to get out there and show your professionalism. Try these tactics to successfully advertise your accountancy services.
Marketing is an incredibly stressful industry right now. It’s understandable – after all, there is so much competition to fight against. It’s no surprise that, as an industry, it isn’t doing too well in the burnout stakes. A recent article in Marketing Week laid out some stark figures: 71% of marketers felt burnt out, while 66% expected their stress levels at work to increase in the near future.
Who are the top Digital Marketing Gurus that are putting out the best content and changing the industry?
It’s tough to discern the mediocre from the best, but there are a handful who stand out.
I’m sure I’ve left some top-notch players off the list, but in my opinion these 25 guys have dominated the digital marketing world and have helped me progress the most.
If your life depends on technology, you’d better read this cover to cover. Kleiner Perkins partner Mary Meeker’s yearly Internet Trends report is the ultimate compilation of essential tech statistics, encompassing everything from Snapchat to drones and from smartphone penetration to on-demand food. Today she dropped the 2015 report.
THE MARY MEEKER INTERNET TRENDS REPORT 2015
Mary Meeker’s latest Internet Trends report, for 2015, has some very interesting data points.
Almost 3 billion people – 40 percent of the world’s population – are now using the internet, which means there is a huge opportunity for brands and businesses to market to new customers, cultivate relationships and generate extra revenue through internet marketing.
Digital channels are clearly growing in importance, so how can you compete effectively in 2015? Take a look at this infographic, which illustrates what works best in digital marketing today and recommends three marketing pillars to help you plan, manage and optimize digital channels.
THE STATE OF DIGITAL MARKETING IN 2015
What works best in Digital Marketing today?
Whenever you create a new website or blog for your business, the first thing you probably want is for people to find it. And, of course, one of the ways you hope they will find it is through search. But typically, you have to wait for the Googlebot to crawl your website and add it (or your newest content) to the Google index.
So the question is: how do you ensure this happens as quickly as possible? Here are the basics of how website content is crawled and indexed, plus some great ways to get the Googlebot to your website or blog to index your content sooner rather than later.
What Are Googlebot, Crawling, and Indexing?
Before we get started on some good tips to attract the Googlebot to your site, let’s start with what the Googlebot is, plus the difference between indexing and crawling.
- The Googlebot is simply the search bot software that Google sends out to collect information about documents on the web to add to Google’s searchable index.
- Crawling is the process where the Googlebot goes around from website to website, finding new and updated information to report back to Google. The Googlebot finds what to crawl using links.
- Indexing is the processing of the information gathered by the Googlebot from its crawling activities. Once documents are processed, they are added to Google’s searchable index if they are determined to be quality content. During indexing, the Googlebot processes the words on a page and notes where those words are located. Information such as title tags and ALT attributes is also analyzed during indexing.
So how does the Googlebot find new content on the web, such as new websites, blogs, and pages? It starts with web pages captured during previous crawl processes and adds in sitemap data provided by webmasters. As it browses pages previously crawled, it detects links on those pages and adds them to the list of pages to be crawled. If you want more details, you can read about them in Webmaster Tools Help.
Hence, new content on the web is discovered through sitemaps and links. Now we’ll take a look at how to get sitemaps on your website and links to it that will help the Googlebot discover new websites, blogs, and content.
How to Get Your New Website or Blog Discovered
So how can you get your new website discovered by the Googlebot? Here are some great ways. The best part is that some of the following will help you get referral traffic to your new website too!
- Create a Sitemap – A sitemap is an XML document on your website’s server that lists each page on your website. It tells search engines when new pages have been added and how often to check back for changes on specific pages. For example, you might want a search engine to come back and check your homepage daily for new products, news items, and other new content. If your website is built on WordPress, you can install the Google XML Sitemaps plugin and have it automatically create and update your sitemap for you, as well as submit it to search engines. You can also use tools such as the XML Sitemaps Generator. (A minimal example sitemap appears after this list.)
- Submit Sitemap to Google Webmaster Tools – The first place you should take your sitemap for a new website is Google Webmaster Tools. If you don’t already have one, simply create a free Google Account, then sign up for Webmaster Tools. Add your new site to Webmaster Tools, then go to Optimization > Sitemaps and add the link to your website’s sitemap to Webmaster Tools to notify Google about it and the pages you have already published. For extra credit, create an account with Bing and submit your sitemap to them via their Webmaster Tools.
- Install Google Analytics – You’ll want to do this for tracking purposes regardless, but it certainly might give Google the heads up that a new website is on the horizon.
- Submit Website URL to Search Engines – Some people suggest that you skip this step because there are many other ways to get a search engine’s crawler to your website. But it only takes a moment, and it certainly doesn’t hurt. Submit your website URL to Google by signing in to your Google Account and going to the Submit URL option in Webmaster Tools. For extra credit, submit your site to Bing. You can use the anonymous tool below the Webmaster Tools sign-in to submit URLs – this will also submit them to Yahoo.
- Create or Update Social Profiles – As mentioned previously, crawlers get to your site via links. One way to get some quick links is by creating social networking profiles for your new website or adding a link to your new website to pre-existing profiles. This includes Twitter profiles, Facebook pages, Google+ profiles or pages, LinkedIn profiles or company pages, Pinterest profiles, and YouTube channels.
- Share Your New Website Link – Once you have added your new website link to a new or pre-existing social profile, share it in a status update on those networks. While these links are nofollow, they will still alert search engines that are tracking social signals. For Pinterest, pin an image from the website and for YouTube, create a video introducing your new website and include a link to it in the video’s description.
- Bookmark It – Use quality social bookmarking sites like Delicious and StumbleUpon.
- Create Offsite Content – Again, to help in the link building process, get some more links to your new website by creating offsite content such as submitting guest posts to blogs in your niche, articles to quality article directories, and press releases to services that offer SEO optimization and distribution. Please note this is about quality content from quality sites – you don’t want spammy content from spammy sites because that just tells Google that your website is spammy.
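For reference, here is a minimal sketch of the sitemap file described in the Create a Sitemap step above; the domain, date, and change frequency are placeholder values:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; example.com is a placeholder domain -->
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2015-12-01</lastmod>
    <changefreq>daily</changefreq>
  </url>
</urlset>
```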
How to Get Your New Blog Discovered
So what if your new website is a blog? Then, in addition to all of the above options, you can also do the following to help get it found by Google.
- Setup Your RSS with Feedburner – Feedburner is Google’s own RSS management tool. Sign in to your Google account and submit your feed with Feedburner by copying your blog’s URL or RSS feed URL into the “Burn a feed” field. In addition to your sitemap, this will notify Google of your new blog and of each new post as your blog is updated.
- Submit to Blog Directories – TopRank has a huge list of sites you can submit your RSS feed and blog to. This will help you build even more incoming links. If you aren’t ready to do them all, at least start with Technorati as it is one of the top blog directories. Once you have a good amount of content, also try Alltop.
Once your website or blog is indexed, you’ll start to see more traffic from Google search. Plus, getting your new content discovered will happen faster if you have set up sitemaps or have an RSS feed. The best way to ensure that your new content is discovered quickly is simply by sharing it on social media networks through status updates, especially on Google+.
Also remember that blog content is generally crawled and indexed much faster than regular pages on a static website, so consider having a blog that supports your website. For example, if you have a new product page, write a blog post about it and link to the product page in your blog post. This will help the product page get found much faster by the Googlebot!
What other techniques have you used to get a new website or blog indexed quickly? Please share in the comments!
What is Robots.txt?
Robots.txt is a simple text file that specifies the pages on a website that must not be crawled (or in some cases must be crawled) by search engine bots. The file should be placed in the root directory of your site. The standard for this file was developed in 1994 and is known as the Robots Exclusion Standard or Robots Exclusion Protocol.
Some common misconceptions about robots.txt:
- It stops content from being indexed and shown in search results.
If you block a certain page or file in a robots.txt file but the URL to the page is found in external resources, search engine bots may still discover the URL through those links and show the page in search results. Also, not all robots follow the instructions given in robots.txt files, so some bots may crawl and index pages listed in a robots.txt file anyway. If you want an extra indexing block, a robots Meta tag with a ‘noindex’ value in the content attribute will serve as such when used on the specific web pages, as shown below:
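```
<meta name="robots" content="noindex">
```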
- It protects private content.
If you have private or confidential content on a site that you would like to block from the bots, please do not only depend on robots.txt. It is advisable to use password protection for such files, or not to publish them online at all.
- It guarantees no duplicate content indexing.
As robots.txt does not guarantee that a page will not be indexed, it is unsafe to use it to block duplicate content on your site. If you do use robots.txt to block duplicate content make sure you also adopt other foolproof methods, such as a rel=canonical tag.
- It guarantees the blocking of all robots.
Unlike Googlebot, not all bots are legitimate, and illegitimate bots may simply ignore the instructions in a robots.txt file. The only way to block these unwanted or malicious bots is by blocking their access to your web server through server configuration or with a network firewall, assuming the bot operates from a single IP address.
Uses for Robots.txt:
In some cases the use of robots.txt may seem ineffective, as pointed out in the above section. This file is there for a reason, however, and that is its importance for on-page SEO.
The following are some of the practical ways to use robots.txt:
- To discourage crawlers from visiting private folders.
- To keep the robots from crawling less noteworthy content on a website. This gives them more time to crawl the important content that is intended to be shown in search results.
- To allow only specific bots access to crawl your site. This saves bandwidth.
- Search bots request robots.txt files by default. If they do not find one they will report a 404 error, which you will find in the log files. To avoid this you must at least use a default robots.txt, i.e. a blank robots.txt file.
- To provide bots with the location of your Sitemap. To do this, enter a directive in your robots.txt that includes the location of your Sitemap:
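```
# example.com is a placeholder for your own domain
Sitemap: http://www.example.com/sitemap-location.xml
```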
You can add this anywhere in the robots.txt file because the directive is independent of the user-agent line. All you have to do is specify the location of your Sitemap in the sitemap-location.xml part of the URL. If you have multiple Sitemaps you can also specify the location of your Sitemap index file. Learn more about sitemaps in our blog on XML Sitemaps.
Examples of Robots.txt Files:
There are two major elements in a robots.txt file: User-agent and Disallow.
User-agent: The user-agent line names the bot that the rules apply to. It is most often given the wildcard (*), an asterisk signifying that the instructions apply to all bots. If you want certain bots to be blocked or allowed on certain pages, you can specify the bot’s name under the user-agent directive.
Disallow: When Disallow has no value specified, bots can crawl all the pages on a site. To block a certain page, use only one URL prefix per Disallow line; you cannot include multiple folders or URL prefixes under a single Disallow element in robots.txt.
The following are some common uses of robots.txt files.
To allow all bots to access the whole site (the default robots.txt) the following is used:
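```
User-agent: *
Disallow:
```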
To block the entire server from the bots, this robots.txt is used:
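```
User-agent: *
Disallow: /
```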
To allow a single robot and disallow other robots:
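```
# Googlebot is used here as an example of the one allowed bot
User-agent: Googlebot
Disallow:

# All other bots are blocked from the entire site
User-agent: *
Disallow: /
```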
To block the site from a single robot:
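```
# BadBot is a placeholder for the user-agent of the bot you want to block
User-agent: BadBot
Disallow: /
```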
To block some parts of the site:
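```
# The directory names are placeholders for the folders you want to block
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /private/
```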
Use this robots.txt to block all content of a specific file type. In this example we are excluding all PowerPoint files. (NOTE: The dollar ($) sign indicates the end of the URL):
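```
User-agent: *
Disallow: /*.ppt$
```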
To block bots from a specific file:
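```
# The path is a placeholder for the file you want to block
User-agent: *
Disallow: /directory/file.html
```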
To crawl certain HTML documents in a directory that is blocked from bots you can use an Allow directive. Some major crawlers support the Allow directive in robots.txt. An example is shown below:
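```
# Placeholder paths: the folder is blocked, but one page inside it is allowed
User-agent: *
Allow: /folder/page.html
Disallow: /folder/
```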
To block URLs containing specific query strings that may result in duplicate content, the robots.txt below is used. In this case, any URL containing a question mark (?) is blocked:
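```
User-agent: *
Disallow: /*?
```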
Sometimes a page will get indexed even if you include it in the robots.txt file, for reasons such as being linked externally. To completely block such a page from being shown in search results, you can include a robots noindex Meta tag on the page itself. You can also add a nofollow value to instruct the bots not to follow the page’s outbound links. Insert the following codes:

For the page not to be indexed:
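```
<meta name="robots" content="noindex">
```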
For the page not to be indexed and links not to be followed:
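```
<meta name="robots" content="noindex, nofollow">
```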
NOTE: If you add these pages to the robots.txt file and also add the above Meta tag to the page, the page will not be crawled, but it may still appear in the URL-only listings of search results, because the bots were blocked from reading the Meta tags within the page.
Another important thing to note is that you must not include any URL that is blocked in your robots.txt file in your XML sitemap. This can happen, especially when you use separate tools to generate the robots.txt file and XML sitemap. In such cases, you might have to manually check to see if these blocked URLs are included in the sitemap. You can test this in your Google Webmaster Tools account if you have your site submitted and verified on the tool and have submitted your sitemap.
Go to Webmaster Tools > Optimization > Sitemaps; if the tool shows any crawl error on the sitemap(s) submitted, double-check whether the affected page is one blocked in robots.txt.