Featured Post

Avasoft Solutions: The Best Business Web Design for Your Company

Every company wants the best business website design possible. After all, a website can turn into a vital connecting point to hundreds or even thousands of potential clients. Web development has come a long way, and staying on top of technology is part of what Avasoft Solutions does to offer clients the...

Common Technical Problems with SEO and How to Fix Them

Posted by Avasoft Team | Posted in Website Technical Issues | Posted on 24-09-2012

SEO is about much more than just utilizing the right keywords on your site. There are also a number of technical issues that must be corrected if you want your site to rank in the search engines. Here are 10 of the most common technical problems you may run across in the world of SEO:

  1. Query parameters – Many database-driven eCommerce sites have this problem, although it can pop up on any website. For example, if site users can filter products by color, you could end up with a URL like www.example.com/product-category?color=12. Once more than one query parameter appears in the URL, you run into duplicate content, because several URLs return the same page: one lists one parameter first, another lists the other parameter first. Google will also only crawl a certain amount of your site, depending on your PageRank, so you could leave a lot of pages un-crawled if you don’t fix this problem. To do so, first determine which keywords you want to target, then figure out which attributes users are searching for with those keywords. For example, they may be searching for a specific brand. Create a landing page with a clean URL that returns that brand’s page without using a query, and block the unwanted parameter URLs in your robots.txt file. However, if your site has been around for a while, Google has already indexed some of those parameter pages, so blocking alone won’t fix the problem. You’ll also need a rel=canonical tag on the parameter pages that points the crawlers to the clean URL you do want indexed (see the canonical URL sketch after this list).
  2. More than one homepage version – Although this issue most often affects .NET sites, it can occur on others as well. URLs like www.example.com/default.aspx or www.example.com/index.html that serve the same page as www.example.com leave the search engines with duplicate versions of your home page. To correct this, run a crawl on your site, export the crawl to a CSV file, and filter on the META column so that you can easily see all versions of your home page. Then use 301 redirects on all of the duplicates to send users, and link equity, to the right version.
  3. Lowercase / uppercase URLs – Sites built on .NET often have an issue with their URLs: the server is configured to respond to URLs containing uppercase letters rather than redirecting them to the lowercase version, which leaves duplicate URLs pointing at the same content. To fix this on IIS 7 servers, use the URL Rewrite module to enforce lowercase URLs.
  4. 302 redirects – A 302 redirect is temporary, so search engines expect the original page to come back at some point and don’t pass link equity to the destination. A 301 redirect, on the other hand, is permanent, so link equity does pass to the new page. IIS SEO Toolkit and Screaming Frog are both great crawling programs for finding 302 redirects, which you can then change to 301 redirects where the move is actually permanent.
  5. Soft 404 – A soft 404 shows users a page telling them that the page they asked for can’t be found, but it sends a 200 code to the search engines. That code tells the search engines the page is working properly, so the error page ends up being crawled and indexed. To locate the soft 404 pages on your site, use Google Webmaster Tools; Web Sniffer is another helpful tool for checking the response code a page actually returns. Once you’ve located the error pages that return a 200 code, configure them to return a proper 404 code instead (see the status-code sketch after this list).
  6. Robots.txt file problems – Sometimes you place a command in your robots.txt file indicating that you want a certain page blocked, but it gets crawled anyway because the combination of commands you used just didn’t work. Use the robots.txt testing feature in Google Webmaster Tools to see how Googlebot will interpret your file, then adjust it as needed (a small robots.txt test appears in the sketches after this list).
  7. Outdated XML sitemaps – Sitemaps help search engines crawl the pages you want them to crawl, but many sites generate a sitemap only once, so it doesn’t take long for it to become outdated as new pages are added or old pages change. To fix the problem, use a tool like Map Broker to find the broken links in your sitemap (or run a quick check like the sitemap sketch after this list), then set the sitemap to regenerate on a regular schedule, however often you need it to.
  8. Base64 URLs – Once in a while you might discover that Webmaster Tools is reporting numerous 404 errors, but when you visit the pages on your site you can’t see the problem. If you’re using Ruby on Rails, there’s a chance the framework is generating authentication tokens to prevent cross-site request forgery, and when Google attempts to crawl the URLs containing those tokens, it receives a 404. Because the tokens are generated on the fly and each one is unique, you won’t be able to reproduce the 404s that Webmaster Tools is reporting. You may be able to fix the issue by adding a wildcard Disallow pattern to your robots.txt file so that Google stops crawling the URLs created by those authentication tokens.
  9. Invisible characters in your robots.txt file – Occasionally you may see a “Syntax not understood” warning in Google Webmaster Tools even though the file looks fine when you open it. If you pull the file up from the command line, however, you’ll see an invisible character (often a byte-order mark) that didn’t show up in your editor. To correct the problem, rewrite your robots.txt file and check it from the command line again to make sure the character is gone (a quick detection sketch appears after this list).
  10. Servers that are misconfigured – Sometimes you might discover that a site’s homepage is no longer ranking in the search engines even though it ranked previously. Browsers normally send an “Accept” header listing the content types they can understand, but a misconfigured server may answer with a content type mirrored from the very first entry in that header instead of the correct one for the page. To check whether this is the problem, fetch the page with your user agent set to Googlebot and compare the response you get (see the user-agent sketch after this list).
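
The sketches below are minimal Python examples for several of the problems above; the domains, paths, and parameter names in them are hypothetical. This first one, for the query-parameter issue in item 1, shows how two parameter orderings resolve to the same page and how to compute the clean URL you would point a rel=canonical tag at:

    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    def canonical_url(url, keep_params=()):
        """Map parameterized URLs onto one canonical form by dropping
        (or sorting) query parameters."""
        parts = urlparse(url)
        params = sorted((k, v) for k, v in parse_qsl(parts.query) if k in keep_params)
        return urlunparse((parts.scheme, parts.netloc, parts.path, "", urlencode(params), ""))

    # Two URLs that return the same product-category page (hypothetical)
    a = "http://www.example.com/product-category?color=12&size=4"
    b = "http://www.example.com/product-category?size=4&color=12"

    print(canonical_url(a))                       # http://www.example.com/product-category
    print(canonical_url(a) == canonical_url(b))   # True: one canonical URL for both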
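
For items 4 and 5, a short status-code audit can reveal which URLs answer with a 302 instead of a 301 and which “not found” pages are soft 404s returning a 200. This sketch assumes the third-party requests library and a hypothetical list of URLs to check:

    import requests

    urls = [
        "http://www.example.com/old-page",       # hypothetical: should 301 to its new home
        "http://www.example.com/no-such-page",   # hypothetical: should return a real 404
    ]

    for url in urls:
        # allow_redirects=False lets us see the redirect status itself
        response = requests.get(url, allow_redirects=False, timeout=10)
        code = response.status_code
        if code == 302:
            print(f"{url}: 302 (temporary) - consider a 301 to {response.headers.get('Location')}")
        elif code == 200 and "not found" in response.text.lower():
            print(f"{url}: possible soft 404 - the page says 'not found' but returns 200")
        else:
            print(f"{url}: {code}")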
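
For item 6, Python’s standard-library robots.txt parser shows how a crawler will read your rules before you rely on Google’s tester. The rules and URLs are hypothetical; note that this parser follows the basic robots.txt syntax only and does not understand Google’s * and $ wildcard extensions:

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt rules you want to verify
    rules = [
        "User-agent: *",
        "Disallow: /private/",
        "Disallow: /checkout/",
    ]

    parser = RobotFileParser()
    parser.parse(rules)

    for url in ("http://www.example.com/private/report.html",
                "http://www.example.com/products/widget"):
        allowed = parser.can_fetch("Googlebot", url)
        print(url, "-> crawlable" if allowed else "-> blocked")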
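
For item 7, this sketch walks an XML sitemap and flags entries that no longer resolve. The sitemap URL is hypothetical and the script again assumes the requests library:

    import requests
    import xml.etree.ElementTree as ET

    SITEMAP_URL = "http://www.example.com/sitemap.xml"   # hypothetical
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)

    for loc in root.findall("sm:url/sm:loc", NS):
        url = loc.text.strip()
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        if status >= 400:
            print(f"Broken sitemap entry: {url} returned {status}")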
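
For item 9, reading robots.txt as raw bytes makes the invisible character visible. A stray UTF-8 byte-order mark is the usual culprit, though any non-ASCII byte gets flagged here:

    import codecs

    with open("robots.txt", "rb") as f:   # read raw bytes, not decoded text
        raw = f.read()

    if raw.startswith(codecs.BOM_UTF8):
        print("robots.txt starts with a UTF-8 byte-order mark (invisible in most editors)")

    for offset, byte in enumerate(raw):
        if byte > 127:
            print(f"Non-ASCII byte 0x{byte:02x} at offset {offset}")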
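
Finally, for item 10, comparing the server’s response for a normal browser and for Googlebot’s user agent shows whether the content type is being mirrored back from the Accept header. The header values below are typical examples, not required strings:

    import requests

    URL = "http://www.example.com/"   # hypothetical homepage

    profiles = {
        "browser": {
            "User-Agent": "Mozilla/5.0",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        },
        "Googlebot": {
            "User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)",
            "Accept": "*/*",
        },
    }

    for name, headers in profiles.items():
        response = requests.get(URL, headers=headers, timeout=10)
        print(f"As {name}: status {response.status_code}, "
              f"Content-Type {response.headers.get('Content-Type')}")

    # If the two Content-Type values differ, the server is probably echoing the
    # first entry of the Accept header instead of sending the right type.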

Google Index Secrets Revealed

Posted by Avasoft Team | Posted in Site Traffic, Website Technical Issues | Posted on 16-08-2012

Ever since the inception of Googlebot, webmasters have been trying to figure out the secret to Google’s indexing practices.  How do you ever really know if your pages have been indexed by the search engine giant?  Thanks to a new feature in Google Webmaster Tools, you can.

Index Status, which is found in the Health menu of Webmaster Tools, shows you exactly how many of your site’s pages Google has indexed.  Clicking the feature takes you to a graph of that count over time.  Ideally the line should be climbing steadily, which shows you that the new content you put on your site is being found, crawled, and then indexed by the search engine giant.  If you don’t see a steadily increasing line on the graph, it’s time to do some digging into your website to figure out where the problem is, and the first place to look is the Advanced tab of the Index Status feature.

The Advanced tab shows more than just the number of pages from your site that Google has indexed.  It also shows how many pages the search engine’s bot has crawled, how many it knows about but has not crawled because they are blocked from crawling, and how many pages on your site Google chose not to include in its results.  Google flags several reasons for not indexing certain pages, including:

  • The page redirects to a different page
  • The page has a canonical tag pointing to another URL
  • Google’s algorithms consider the content too similar to what’s on a different URL and have chosen that other URL instead

So the way Google crawls your site has a lot to do with the way the site is set up.  If you use a lot of canonical tags, they could be keeping pages out of the index, but that is something you can easily have fixed once you know it’s the problem.  If your site doesn’t use redirects or canonical tags at all, on the other hand, you could be left scratching your head about why pages are missing.

Sometimes all you need to do is make it easier for Google to crawl your site by using the search engine’s parameter handling features.  These let you specify which parameters should not be crawled so that Googlebot won’t waste time on URLs that differ only by those parameters.  Crawl efficiency can make a world of difference on some sites, so it is definitely worth checking out.

One of the easiest ways to help Google understand why your site is set up a certain way is to submit an XML sitemap.  However, you must make sure that the sitemaps you submit are comprehensive so that Google will crawl your site the way you want it to (a minimal generation sketch follows below).  In addition, it is important to note that the data in the Index Status feature runs a couple of weeks behind, so the feature is most useful for spotting historical trends rather than real-time data.  But that history does include some valuable information you can’t afford to ignore.
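
If you don’t already have a comprehensive sitemap to submit, a few lines of code can generate one from a list of your pages. This is only a minimal sketch: the URL list is hypothetical, and a real generator would normally pull its URLs from your CMS or database.

    import xml.etree.ElementTree as ET

    # Hypothetical list of pages; in practice this comes from your CMS or database
    pages = [
        "http://www.example.com/",
        "http://www.example.com/services",
        "http://www.example.com/contact",
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
    print("Wrote sitemap.xml - ready to submit through Google Webmaster Tools")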

For help getting your site back on track in Google’s listings, call or email Avasoft today! Our experts will help you discern what all of those numbers in your Index Status actually mean.  Then we will help you use those numbers to boost your site’s performance in the Google rankings.

Penguin Update: Dealing with Google’s Latest Update

Posted by Avasoft Team | Posted in Site Traffic, Website Technical Issues | Posted on 01-06-2012

Google has been releasing updates in leaps and bounds over the past couple of years.  Until now, the search engine’s most controversial update was last year’s Panda update.  Its most recent update, Penguin, has resulted in websites dropping right off the first page of search results, literally overnight.

So why would Google do such a thing?  It’s all about enforcing link value: the new algorithm evaluates the quality of back links rather than the number of them, and it makes sure that the links that count toward your search engine ranking sit on sites that are relevant and that haven’t simply used content spun from what’s already on your site.

The Penguin update deals almost exclusively with link building, especially:

  • Linking profiles
  • Anchor text profiles
  • Blog network affiliation
  • Diversity of links
  • Duplicate and spun content
  • Content makeup profile and mined content
  • Excessive advertisements
  • Brokerage and paid links
  • Comment spam on blogs
  • Links placed in forums and posts on sites that are irrelevant to your site’s content
  • Distribution channels and article sites used for back linking
  • Links from so-called “bad neighborhoods”
  • Too much syndicated content

As you can see, even some of the best websites have been hit by Google Penguin and had their search results affected.  Of course this is Google’s way of remaining the authority on providing quality search results for its users, so the best thing website owners can do is figure out how to live with it.  If Penguin has stripped your site of its excellent search engine ranking, here are several things you should look at:

  1. How quickly did your site receive its backlinks?
  2. Is the anchor text you use on your site diverse?
  3. Did you acquire links from several “bad neighborhoods” in a relatively short amount of time?
  4. Have people been posting blog comments on your site consistently?
  5. Do you use a lot of syndicated content, forums, or article sites to get your back links?
  6. Do you use linking brokers like TextLinkAds?
  7. Has your link profile grown suddenly rather than gradually over time?

The most important thing to take away from these changes is the fact that Google doesn’t penalize your site for what it terms “bad links.”  Instead of penalizing, the search engine simply removes the credibility from links that it deems to be of poor quality or irrelevant to your site.  Of course this spells doom for websites that have already used these practices to gain in the search engine rankings, but it also means that competitors can’t lower your site in the rankings by posting a lot of poor quality links to your site.  Google only takes into account quality links, so if you follow the search engine’s definition of “quality,” you’re as good as gold.

Of course that begs the question of why so many quality sites have taken such a major hit from Google Penguin.  The term “quality” generally means sites that have good content on them and have made good strides in back linking by using social media and other traditionally successful methods like linking through article sites, etc.  Even sites that have been in compliance with Google’s guidelines for webmasters have suffered from this latest update, and it’s entirely because of the way they have done their back linking.

One thing you will notice immediately is that many major online brands haven’t lost their rankings, or if they did, the drop wasn’t nearly as steep as on other sites.  This is because these large sites have such a wide array of back links that a few bad ones matter much less.  In order to determine exactly why your website fell in the search rankings overnight, it’s necessary to look at the good links versus the bad and figure out how much of your link profile is made up of each.  Sites with a link profile made up of mainly bad links will see a significant drop in the rankings, and it all boils down to basic math.

So how do you figure out whether the links to your site are good or bad?  Here are some examples:

Good Links

  • Those on a site with a cache date that’s updated frequently
  • Those on sites with high quality content
  • Links from sites that themselves show good linking behavior
  • Links that use a good diversity of anchor text when pointing to your site
  • Links from sites that are relevant to the content on your site (i.e., a visitor who followed the link from that site to yours would find your site helpful)
  • Links that haven’t been paid for and were acquired organically

Bad Links

  • Links that have been paid for
  • Links on sites that use bad practices for back links
  • Those on sites that exist mainly as link “neighborhoods” for back links
  • New links (quality links in good places will add to your website’s ranking over time, so it is still essential that you build back links in places Google finds to be of quality)
  • Links that use the same anchor text to link to your site everywhere
  • Those generated through the use of spun content

Remember that your link profile can date back more than five years, so if you’re not sure what the problem is, you may have to go back quite a long way.  In the past, there were several techniques used for back linking that are no longer viable.  These practices include:

  • Reciprocal linking
  • Paying for link ads
  • Links on blog networks
  • Comment, forum, and blog spam
  • Syndicated or spun content that uses consistent anchor text to link back to your site
  • Sponsored ads
  • Link directories
  • Link development that has been outsourced to companies in foreign countries
  • Placing links on sites that are non-geographically relevant, like a site with a domain extension showing that it’s in India (.in) linking to a site with a UK domain extension (.uk)

Your immediate reaction to all of these changes might be to hurry up and remove as many of these poor links as possible, but unfortunately this will only have a marginal impact on your search engine results. The best way to combat Penguin is to create a new strategy for getting those back links so that you can acquire new links to your site organically.  Social media provides an excellent route for acquiring the organic back links your site needs to soar in the search engine rankings.

If you’re still having difficulties determining why your site fell in this round of Google updates, there are other things to take into consideration.  For example, if you have a lot of spun content on your site, you could also be experiencing problems.

The bottom line is, if you want your site to avoid problems with Google updates like Penguin, you just need to focus on providing quality content that’s meant to be read by human beings rather than search engines.  This is the key to long-lasting placement high in the search engine rankings.

Let Avasoft analyze your site and help you deal with Google’s Penguin update.