Apr 24, 2013

What is the Google sandbox?


The sandbox (Google)

The sandbox is a penalty that mainly affects young sites during their first months, or sites with little or no natural link-acquisition strategy. It results in a drop in the site's rankings for all of its keywords, and even for its own name. It usually comes from acquiring too many poor-quality links, or from link building that looks anything but discreet. For example, if in one week you pick up 150 directory links while only getting one or two links from other kinds of sites, it looks very suspicious; Google's algorithm notices it and penalizes you.
 
What can you do to get out of it?

If you are caught in the sandbox, the best way to help yourself is to dilute the mass of suspicious links with other links that look natural and come from other types of sites than directories. Do not stop building backlinks under any circumstances: nothing is more suspect than a site that gets a lot of directory links and then, overnight, stops getting any links at all. Always make sure your link profile looks as natural as possible.
 
Duplicate content (Google)

What is the penalty?

This penalty relates to duplicate content, whether you copied it from other sites or other sites copied it from you. We are not dealing here with the filter Google applies to one or more pages of a site, which simply causes those pages to be ignored. Here we deal with the penalty applied to sites that consist largely of duplicate content, which Google will simply drop from its results. The penalty is comparable to the sandbox and amounts to losing around 95% of your Google traffic.

Google Sandbox: Specific Cases

This section presents in detail how the sandbox helps fight the spam that floods the Google search engine. Against all odds, it helps you understand how Google can come to believe that your site is spam, and how to make it understand that your site is perfectly legitimate and of very good quality.
A longer tutorial explains how to tell whether a site is in the sandbox. It amounts to a real toolkit for diagnosing whether a site is in the Google sandbox or whether the loss of visitors from the famous search engine is simply the result of bad SEO.
 
How does the Sandbox affect your site?

If you have a brand-new site, it is likely to be placed in the Sandbox. You should take this into account, but it should not change the way you build your site or bring it to market. You should use the Sandbox filter to your advantage.
Google still ranks sites more or less the same way as in the past: websites are judged on the quality of their backlinks and content. Google will keep changing how it assesses backlinks and content, but the basics of ranking remain the same.

While your site is in the sandbox, you should use this time to build your own traffic, using approved methods such as writing articles, building a community of visitors (blog, forum, etc.) and seeking partnerships with websites that bring something useful to your visitors. During this period, you have a great opportunity to put in place all the elements of a good position in the search engines. When you finally get out of the sandbox, your site should normally be well positioned in Google.
 
Is your site in the sandbox?

When webmasters learn about the Sandbox filter, their first question is always whether their site has been placed in it. Determining whether your site is in the Sandbox is relatively easy.

First, being placed in the Sandbox does not in any way mean that you are blacklisted (banned from the index). If you search for your domain in Google and it returns no results even though you had already been indexed, there is a high probability that your site is blacklisted. The best way to determine whether you have been blacklisted is to look at your log files to see whether Google still visits your site. In general, sites that Google keeps crawling are not blacklisted, regardless of the sites linked to them.

If you are not blacklisted but your rankings are poor, review the quality of your content and of your backlinks; you should also check your rankings on keywords with little or no competition. Remember that the filter mostly affects sites on competitive keywords, and you can use this to determine whether you have been sandboxed. Finally, if you rank well on all the other major search engines but not on Google, there is a good chance your site has been sandboxed.
 
Is there a way out of the sandbox?

Ideally, yes, there is a way out of the Sandbox, but you will not like the answer: you just wait. The Sandbox filter is a permanent filter and is only intended to reduce search-engine spam. It was not put in place to prevent sites from succeeding, so if you continue to build your site as it should be built, you will get out of the sandbox and join the established sites.

Likewise, if your new site is placed in the Sandbox, turn it to your advantage. This is a great opportunity to use that time to set up sources of traffic outside the search engines. If you have a website that ranks well in the search engines, you may be tempted to ignore the other proven methods of traffic generation, such as building your community or earning potentially powerful backlinks through partnerships. If you take advantage of the time your site spends in the sandbox, then when it comes out your external traffic sources will add to the traffic from Google, and you will see a significant and welcome increase in your traffic levels.

The sandbox is not a blacklist

Although the sandbox is a penalty that applies to all pages of a website, it is very different from the blacklist. A blacklisted site will not be present at all in the Google index, while a sandboxed site will be present in the index but strongly downgraded on many queries.

When a site is blacklisted, it is generally because of questionable practices that Google does not endorse; that is why it is completely de-indexed by Google. However, if a site makes small "mistakes" at the start, it may not deserve to be totally de-indexed, but Google still has to make sure it is not spam. That is why it goes into the sandbox instead, to prove itself and, if necessary, correct what is wrong.
 
How to avoid it?

To avoid the sandbox while still happily using directories (I recommend submitting to around fifty of the best rather than to all of them), you will have to get links elsewhere (forums, blogs, Digg-like sites) at the same time as you register with directories, in nofollow as well as dofollow. In short, do everything to make your netlinking look natural. It is not the number of links that lands you in the sandbox, but backlinks that lack quality.
 
Conclusion

The term Sandbox is not easy for beginners to understand; while researching it I have sometimes seen people confuse the sandbox with the blacklist. The mere fact that the pages of a new website are indexed leads some inexperienced webmasters to exclaim that they are not in the sandbox, so they do not really seem to understand what the sandbox actually is.

Creating a site of the highest quality should not normally pose problems. The sandbox may indeed make an appearance, but getting out of it can happen fairly quickly, in any case faster than if you have a poor-quality site.

In any case, I think we must now accept that the sandbox is there, and that during the first months of a site's life we have to live with it. Taking advantage of the sandbox is still the best thing to do: instead of focusing on SEO from the very beginning, it is better to focus on creating content and services and on staying close to your readership, so you are aware of the needs and expectations of your audience.

Posted By Muhammad Asif, 10:43 PM

Apr 21, 2013

CSS and HTML Tips for W3C Markup Validation


The W3C system serves to improve the quality of your blog's source code, resulting in a better understanding by everything that reads the code (browsers, smartphones, simulators, web crawlers, etc.). It checks for errors and helps resolve conflicts that make a website less accessible to search engines and browsers.

It is not the intention of this article to teach XHTML validation, only to point out common faults that can be resolved easily. The tips presented here follow the XHTML Transitional standard.

An important point when writing code is not to forget to close anything. For this, the ideal is to establish a hierarchy and an order, with comments marking where each zone or div begins and ends. In XHTML you cannot leave anything unclosed. What complicates things is that there are two ways to close an HTML tag:
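As a quick sketch (the tags chosen here are just illustrative), a normal element is closed with a separate closing tag, while an empty element is closed in place:

<!-- Paired tag: opened, then closed with a separate closing tag -->
<p>Some text inside a paragraph.</p>
<!-- Empty element: closed in place with a trailing slash -->
<br />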

Never forget the ALT attribute on images. It is important to explain to the search engines what the images are. The ALT attribute is the thing everyone always forgets when calling an image, yet it is obligatory, because if the image does not load, the ALT (alternative) text is shown in its place.

Users with visual impairments use screen-reading systems that read the ALT attribute and tell them what the image is and, more importantly, search engines use ALT to index images. It is an essential point for SEO.
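A minimal example (the file name and text are illustrative):

<img src="logo.png" alt="Company logo" />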

Tags cannot be capitalized. All tags must be written in lower case, otherwise you will get validation failures.
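A quick illustration:

<!-- Fails XHTML validation: -->
<P>Text</P>
<!-- Valid: -->
<p>Text</p>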

On the other hand, another important point, as we said, is to respect the hierarchy of HTML. There is a certain hierarchy within HTML code: block-level elements, for example, may not sit inside P tags (paragraphs). When you need to style a small part of the text inside a paragraph, you need to use the SPAN tag.
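A sketch of the difference (the class name is illustrative):

<!-- Invalid: a block-level div inside a paragraph -->
<p>Some text <div class="highlight">styled part</div> more text.</p>
<!-- Valid: an inline span inside a paragraph -->
<p>Some text <span class="highlight">styled part</span> more text.</p>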

This post has vast and very thorough content, so it was divided into parts to explain every detail. It is a complex issue, but extremely important for your blog:

  • Introduction: about HTML and other code
  • Validating your blog's XHTML at the W3C
  • Has the W3C reported errors in your blog's code?
  • My Blogger does not validate with the W3C!
  • Where can I learn more about HTML?

Introduction: about HTML and other code

In the early days of the internet, there were several code languages that only scientists could understand; the internet was used exclusively by them, to communicate with each other.

In the early '90s, with the rise of the internet, HTML came to be used in a simpler way, and any user could manipulate and understand the code. HTML has since become the most common language for exchanging information on the internet.
Many people do not know it, but HTML is not a programming language but a markup language: we use it just to mark up text and handle formatting. HTML stands for "HyperText Markup Language".
With HTML being used more and more, the need for standardization arose, since each user understood it differently and each browser handled it differently. That is when the World Wide Web Consortium (W3C) was formed in the mid-90s, a consortium with members worldwide, from private companies to government agencies.

This consortium takes care of standardization and best practices for languages such as HTML, XML and XHTML, and even image formats such as PNG, JPG, etc.

XHTML is an invention of the W3C, intended to make the language more practical both for those developing a project for the web and for browsers and search engines.

As soon as you start developing a site in XHTML you need to write its DOCTYPE. This is what determines how all the rest of the code in your project will be handled, because it is where the browser or search engine starts reading.

XHTML 1.0, the most used version today, has the following DOCTYPE models:

  • Strict: the most restricted form of XHTML; it accepts less code (it does not accept <center>, inline CSS, etc.), but it is the form that makes your code cleanest, making it easier for browsers and search engines to read.
  • Transitional: the most used; a little more flexible than Strict, it accepts more code while still allowing clean markup.
  • Frameset: no longer used very often; it is meant for websites built entirely with frames.

DOCTYPE example:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

There are also standards for HTML 4.01, which likewise comes in Strict, Transitional and Frameset forms.

In recent years version 5 of HTML has been released. It brings improvements in every sense: search engines understand your blog much better with HTML 5, and the better they understand it, the better your information is indexed. But some people are reluctant to use it because it is something new and older browsers are not able to understand certain things in HTML 5.

But if you have the opportunity to develop with it, it will surely be a good investment. As I said in this post, HTML is just a markup language: older browsers will understand your markup, they just will not be able to apply CSS styles to some of the new elements, which is nothing a little creativity cannot solve...

Validating your blog's XHTML at the W3C

Validating your blog does not mean that, once the W3C has validated the code, it will stay valid forever. The W3C simply checks the code and warns you whether everything is OK or whether there are errors.

To validate you can use the following link: http://validator.w3.org/

There are three options for validating your code: by URI (the address of your blog), by uploading an HTML file, or by simply pasting the entire HTML code and submitting it for checking.

These options are divided into tabs on the W3C website.

The W3C will detect your HTML DOCTYPE and check the document against it.

Has the W3C reported errors in your blog's code?

Surely many of you who run the check now will feel frustrated because it reports hundreds of errors in your code.

I often say that there are two types of errors at validation time: pure errors, and errors caused by "custom HTML".

  • Pure: you wrote your code the wrong way, leaving it broken and ignoring good practices and the hierarchy of the code.
  • Custom HTML: for certain projects it is necessary to create kinds of "custom HTML" for JavaScript, PHP, etc., embedded in parts of the code to provide certain features. This custom HTML is not part of the context of the W3C, nor of browsers and search engines.


Whenever possible, avoid using custom HTML; it is no good for the browser nor for the search engine. As a simple matter of SEO, this kind of HTML will not be read properly, so it should be avoided.

Right here on Tools Blog today we have 4 errors caused by custom HTML: two errors are caused by the JavaScript of the Twitter counter, and the other two by the AddThis code used to share the posts.

I have not yet had time to develop something to get around this, but soon the Tools Blog code will be fully validated.

Previously there were over 200 errors; although that seems like a big improvement, it is still not our ideal.

The W3C itself gives tips at validation time on how best to fix things (for custom HTML it will tell you that there is something wrong and where it is). Use these tips to improve your code.

My Blogger does not validate with the W3C!

Your Blogger blog does not validate with the W3C because there are some flaws in the source of the service, especially if you use old themes.

Many people do not know it, but Blogger was not invented by Google: it was created by Pyra Labs in 1999 (at that time there were even paid plans for Blogger), and only in 2003 did the giant Google buy it, making it completely free.

I believe that in the midst of this transition some things were left behind, leaving the Blogger source with glitches and early-version dirt, which forces the use of several pieces of custom HTML; even the default template system has hundreds of errors at validation time.

It is no use deceiving yourself and thinking, "If it is Google, then Google Search will be able to read my blog easily," because you are mistaken.

That is because Google Search results are organic, generated without direct or indirect interference from Google itself.

As I said in this post, validation is not just a matter of good practice, but also a question of SEO.

The newer templates released by Blogger brought some improvements in this direction too, but much remains to be done to bring the code up to W3C good practices.

Remember, though, that it is still possible to have a quality blog on Blogger.

Where can I learn more about HTML?

The best way to avoid these errors is constant, in-depth study of the best practices and uses of HTML and XHTML.

I strongly recommend that you study the following site:
W3Schools - The best and most comprehensive website on the subject in the world.

Posted By Muhammad Asif, 1:19 PM

Apr 20, 2013

W3C Validation and its importance



The W3C, or World Wide Web Consortium, is an organization that governs the standards and recommendations for website development. In short, it is a set of standards for a well-made website. This does not mean that if you do not meet the standards your site is badly made.

Validating a website is easy: just go to this page (W3C Validator) and enter the address of your blog or website. It will tell you how many errors you have. The last check I ran on my blog found only 3 errors, which I have since corrected; the report now says "This document was successfully checked as HTML5!". In a future post I will tell you how to fix such errors.

When doing the review, some of you will find a small number of errors, fewer than 50, and others excessive quantities, more than 500. Whatever the numbers, do not panic; sometimes correcting one silly little mistake fixes more than one error at a time. Take it easy. Some errors are easy to find and others are not: some errors are in the posts you write, there may be errors in the widgets your website uses, or, if for example you use WordPress, you may find errors in your theme files.

And what is the point of correcting them?

Two reasons. To meet the standard, which is no bad thing, and to raise your positioning a little in Google and other search engines. Remember that Google loves pages with clear, clean code, and one sure way to achieve that is to meet the W3C standards. Note that I am not saying that if you reach 0 errors your visits will increase a lot; rather, they will improve by a few percent. In my philosophy, that small percentage is worth the trouble of fixing these errors.

On this subject I have read several theories that say things like "you do not need to validate the entire site or page, only the first half"... hmm... maybe. But the truth is that, calmly and patiently, you can greatly reduce the errors. There is no need to fix everything in one day.

I would like to know the results of those of you reading this post... if you can, leave me a comment with the result of your test.

Posted By Muhammad Asif, 4:27 AM

Apr 5, 2013

The robots.txt file



The robots.txt file is a set of rules according to which search engines index your site. Where should the robots.txt file be placed? What record format and syntax does it support? How do you use the «ROBOTS» meta tag? What non-standard methods of controlling search engines exist? How do you avoid gross errors when writing a robots.txt file? That is the list of questions raised in this article.

And so, here we go...

Purpose of the robots.txt file

The robots.txt file has existed for a long time: the agreement on its use goes back to 1994. It is a plain text file containing clear instructions for search engines. For those who do not know:

  • A search robot is a search-engine program that indexes the documents published on the Internet.
  • Indexing is the process of adding information about a site to the search engine's database.
  • Indexing is needed so that users of search engines (Google, Rambler, Yahoo, MSN, etc.) can quickly find the information they need on the Web.

In practice, the instructions in a robots.txt file generally come down to telling the search engine which files and directories of the site should not be indexed, i.e. not added to its database. Any site contains directories and files with no useful information for users; indexing them can put extra load on the server and can even hurt the site's position in the search results.

The robots.txt file blocks robots' access to such directories and files, providing an invaluable service to everyone. Typically you would not index script directories such as «cgi-bin» and other software directories, nor directories and files that contain proprietary or other information not intended for indexing.

The format of the robots.txt file

To have search engines index your site, it is enough to create an empty robots.txt file and place it in the root folder of your website; that is where the search engine robot will look for it. The path to the file should be:
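For example, with example.com standing in for your own domain:

http://www.example.com/robots.txt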
The file must be named robots.txt and nothing else, with the name in lower case, and it must sit in the root of your site. An empty file allows all search engines to index all the content of your website. It is worth mentioning that robots.txt does not actually prevent access to content and has only an advisory function: a robot instructed to inspect all directories can ignore the recommendations and crawl wherever it wants.

Syntax

To recommend that a robot not index particular directories, the file contains one or more records, each ending in a newline (CR, CR/NL or NL); multiple records are separated by one or more blank lines. Each record consists of lines of the following form:
Here <field> is the name of a directive (directive names are not case sensitive) and <value> is the value the directive should apply, so each line has the form «<field>: <value>». There are not many directives: User-agent, Disallow, Host, and Sitemap.

The robots.txt file can include comments starting with "#" and ending with the end of the line.

User-Agent

Each record should begin with one or more «User-agent» lines.

  • The value of this field is the name of the robot to which the access rules apply.
  • If several robots are listed in one record, the rules will be the same for all of them.
  • If the value of this field is the symbol "*", the rules apply to absolutely all search engines.

Disallow

The record then contains one or more lines with the «Disallow» directive.

A record must contain at least one «User-agent» line and at least one «Disallow» line.

Examples of the robots.txt file

Example 1:



In Example 1, the contents of the directories /cgi-bin/script/ and /tmp/ are closed to indexing.
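A file matching that description would look roughly like this:

User-agent: *
Disallow: /cgi-bin/script/
Disallow: /tmp/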

Example 2:
In Example 2, the contents of the directory /tmp/ are closed to indexing, but the spider powersearch is allowed everything.
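Based on that description, something like:

User-agent: powersearch
Disallow:

User-agent: *
Disallow: /tmp/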

Example 3:
Example 3 prohibits every search engine from indexing the entire site.
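That comes down to something like:

User-agent: *
Disallow: /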

Host

The «Host» directive is used only by the Google robot; the other robots simply ignore it. Add this line to your robots.txt file, specifying the name of your website, to point to its main mirror. It is a useful thing that helps avoid problems when mirrors are glued together. Note that even if you want Google to index the site completely, the record MUST CONTAIN AT LEAST ONE LINE WITH THE «DISALLOW» DIRECTIVE.

An example for the Google robot:
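A sketch following the description above (www.example.com stands in for your own main mirror):

User-agent: Googlebot
Disallow:
Host: www.example.com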

Sitemap

This directive tells search engines exactly where to find the site map. A site map is useful when your website contains thousands of pages, since it helps the search engine index them more quickly. If necessary, add the following line to your robots.txt:
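For example (the URL is a placeholder for your own sitemap):

Sitemap: http://www.example.com/sitemap.xml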

Examples of the use of the robots.txt file

File Location
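As shown earlier, the file always sits at the root of the domain (example.com is a placeholder):

http://www.example.com/robots.txt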

Example: Prevent the entire site from being indexed by any robot
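Something like:

User-agent: *
Disallow: /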

Example: Allow all robots to index the entire site
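A record with an empty Disallow does this:

User-agent: *
Disallow: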

Or you can just create an empty robots.txt file.

Example: Block only a few directories from being indexed
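A sketch (the directory names are just examples):

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /private/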

Example: Prevent indexing of the site for one robot only
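A sketch ("BadBot" is a placeholder for the robot you want to keep out):

User-agent: BadBot
Disallow: /

User-agent: *
Disallow: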

Example: Allow one robot to index the site and block all the others
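For instance, letting in only Google's robot:

User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /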

Example: Block all files from indexing except one

This is not an easy task, since an «Allow» instruction does not exist here. The first option is to move all the files except the one you want indexed into a directory and block that directory from indexing:
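A sketch, assuming the other files were moved into a /docs/ directory:

User-agent: *
Disallow: /docs/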

The second option is to block each file individually:
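A sketch with illustrative file names:

User-agent: *
Disallow: /private.html
Disallow: /old-page.html
Disallow: /tmp.html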

An example of a robots.txt file for a WordPress blog

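One common variant seen on WordPress blogs (adjust the paths and the sitemap URL to your own installation) looks something like this:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /trackback/
Disallow: /feed/

Sitemap: http://www.example.com/sitemap.xml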

Meta tag ROBOTS

There are times when you need to block a single page from indexing. This is done using the «ROBOTS» meta tag.
In this simple example:

<meta name="robots" content="noindex, nofollow" />

With this, the robot should neither index the document nor follow the links it contains.
Unlike the Robots Exclusion Standard, where restricting access requires the site administrator (access to the server root), here you can do it yourself at the page level.

Where to place the meta tag ROBOTS:
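It belongs in the <head> section of the page, roughly like this (the title text is illustrative):

<html>
<head>
  <title>Page title</title>
  <meta name="robots" content="noindex, nofollow" />
</head>
<body>
  ...
</body>
</html>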

Just a few examples of its use:
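The content attribute combines index/noindex with follow/nofollow:

<meta name="robots" content="index, follow" />
<meta name="robots" content="noindex, follow" />
<meta name="robots" content="index, nofollow" />
<meta name="robots" content="noindex, nofollow" />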

Non-standard methods of controlling search engines


The robots.txt file can limit search engines' access to the directories and files of the site, and the ROBOTS meta tag works at the page level. But what if the task is to prevent indexing of only part of the text, or of a single link, on a page? For this there are the noindex tag and the rel="nofollow" attribute of the A tag.

Example:
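A sketch (the URL and wording are illustrative):

<p>This first sentence will be indexed.
<noindex>This second sentence is hidden from the engines that support the noindex tag.</noindex>
<a href="http://example.com/" rel="nofollow">A link robots are asked not to follow</a></p>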

In the example, the noindex tag asks Rambler (and the other engines that support it) not to index the second sentence, while the nofollow attribute tells Google's robot not to follow the link. The rel="nofollow" attribute can be placed before or after the URL and can share the «rel» attribute with other values, separated by a space. Google's robots do not understand the noindex tag, and its use also breaks the validity of the page's HTML code. If you need to keep the code valid, it is recommended to write it with the following syntax:

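That is, the markers are wrapped in HTML comments so that validators ignore them:

<!--noindex-->This text is hidden from the engines that support noindex.<!--/noindex-->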

Other methods of controlling indexing

There are other methods of blocking search engines' access to site content, for example using the web-server module «mod_rewrite», programmatically with JavaScript, or via the .htaccess file. We will address these in the future.

Additions and comments are welcome. See you soon!

Posted By Muhammad Asif, 5:27 PM

Apr 2, 2013

The effect of dofollow links on PageRank




If you run your own website, you probably already know what dofollow blogs are. For those who are not familiar with the subject, I will explain it in this article.
First, it is worth noting that any progress towards the top of the search results comes down to plain link building, that is, to building up a set of external links. This, of course, also takes into account the thematic and regional similarity of the resources, and more.

Some search engines have acquired clever algorithms and have started to filter out purchased links; some of them are simply not counted for ranking. So what do you do if you cannot buy links? Are there no other options to get backlinks?
Of course there are! True, they are not as easy and quick as buying links, but they are free, which matters for novice bloggers. This is exactly where dofollow blog commenting comes in.

What are dofollow blogs?

A dofollow link is followed by the Google robot; it affects the PageRank of the linked page positively or negatively depending on the quality of the link. Most sites are dofollow by default for the entire structure of the page, except in the comments.
HTML Dofollow

This is a dofollow link:
<a href="http://example.com">example</a>
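By contrast, a nofollow link just adds the rel attribute:

<a href="http://example.com" rel="nofollow">example</a>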
A dofollow blog is one whose authors deliberately leave the links in comments open for indexing.

Returning to the subject of promoting sites with links, I must say that dofollow blog commenting is one way to get a solid link to your resource. They say that comments on these blogs with open links are tightly moderated but, in my experience, that is not the case: you often come across spam and advertising there, which hints at how little respect we show the authors for their work.

Negative side:
· It increases the number of empty, impersonal comments.
· It requires continuous moderation of comments and, as a consequence, there is no direct dialogue in real time, including between commenters.
· Other things being equal, a commenter rarely becomes a subscriber, because the objective is different. That said, I often see the opposite: someone who came for the link reads everything from cover to cover and gives an opinion that shows genuine interest in what they are commenting on. That reverses the usual causation: instead of "read and commented", it becomes "came for the comment (link) and read".

Bottom line:
A dofollow blog pays off only up to a certain point. As traffic grows it becomes more of a burden than a benefit (you can see blogs that at some point switch to "nofollow", or remove the link to the comment author entirely, leaving only the opportunity to express an opinion without any self-promotion). I increasingly tend to close comments with "nofollow", to eliminate the selfish motives and finally hear from anyone who genuinely wants to respond to the content, while still letting readers reach the author's website through a nofollow link.

How to distinguish dofollow from nofollow?

Strangely, many people ask this question. You can find out for yourself whether links are open or closed to robots: in Firefox, go to someone else's blog, select a commenter's nickname, right-click the selection and choose "View Selection Source". Then look at the source:

In the screenshot we see that the link has the attribute rel="external nofollow", from which we conclude that it is not a dofollow blog; otherwise this attribute would be missing. Now, as promised, the list of blogs that were open. The data come only from Google Webmaster Tools. The list has the following structure: address, with PR underneath ↓

Should I make my comments dofollow?

Since the beginning of my blogging adventure, I have always hesitated to attract commenters by making the links in my comments dofollow, and I have never done it because I always feared it would turn against me. But now, for the sake of visibility and interaction with my readers, I am thinking of getting over these concerns.

I hesitate because for quite a few months I have known of the existence of a multitude of robots able to target dofollow blogs and spam them (getting past captcha systems, etc.).
I dread having to moderate a hundred canned comments a day. I am also wary of adding a lot of outbound links to my blog; indeed, it could discredit my site and its content. Search engines increasingly police the practice, and I am afraid of being blacklisted!

In short, I do not know what position to adopt: although I want commenters, I do not want to suffer a penalty from Google. What do you advise? Are you dofollow? Do you see a difference in your rankings? How are you doing with spammers?

Dofollow / Nofollow in comments 

Leaving a comment on a page that has PR8 is something we all seek, but it leaves the page vulnerable to filling up with pointless "SPAM" comments. For this reason people began adding the nofollow attribute to comment links, which Google is then not supposed to follow, so that the spam decreases.

Despite this, the relationship between the nofollow attribute and PageRank is not very clear. We can say this because we are sharing evidence from a site where the comments are nofollow.

This is a site with PR2, or rather that PageRank belongs to a single article, with the particularity that this article has 101 comments. Investigating the links left in the comments, some come from pages with PR6, PR5 and PR4, while the rest have PR1. Below are the screenshots and a link so that you can check for yourself.
Entry page with PageRank 2

The entry has 101 comments whose links are nofollow, but apparently Google ignores that; note that the entry has a PageRank of 2.

PageRank 2

PageRank 2 just for this entry, which has 101 comments.

Full Page PageRank 1 

This is the page corresponding to the previous post, but this time the full page; unlike the entry above, it has PageRank 1.
 PageRank 1

The page has PageRank 1

Conclusion

Is there really a difference between follow and nofollow?

The evidence shows that, despite being nofollow, the links can still positively affect the PageRank of the page.

Posted By Muhammad Asif, 3:08 AM