Matt Garrett asked:

“Arachnophobia” was a U.S. summer blockbuster back in 1990, starring Jeff Daniels and John Goodman, as well as an improbably large, special-effects-generated spider that had hitched a ride in the coffin of one of its victims from the Venezuelan rainforest to a small Californian farming town.

Arachnophobia is also the clinical term for the fear of spiders. Arachnophobics can also dread getting close to areas that might hide spiders.

One would assume that “arachnophilia” would be the clinical opposite of “arachnophobia”, referring to those who collect spiders or raise them as pets. Or perhaps the media could use it for the groupies Tobey Maguire attracted through his role in the Spider-Man movies.

But for the purposes of this article we will use it to refer to those website owners who are forever searching for ways to get the search engine spiders to visit their sites.

Many people who have websites do not build them themselves and do not fully understand how they are constructed. It’s really fairly simple though.

Everything that is visible in a web browser when you visit a website, including the font size, colors and styles (underline, bold, italic, etc.), appears as it does because of “coded” instructions given to it by the site designer.

These standard “codes” are enclosed by pairs of “tags” which tell the web browser displaying the site how it should appear to the visitor. These “tags” are wrapped in angle brackets, < and >, and sit at the beginning and end of each section of text.
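Just to illustrate (the exact tags will of course depend on what the designer wants the page to do), a scrap of that code might look something like this, with an opening tag switching a style on and the matching closing tag switching it off again:

    <!-- The browser never shows the tags themselves, only the text between them. -->
    <p>This sentence contains <b>bold</b>, <i>italic</i> and <u>underlined</u> words.</p>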

The World Wide Web was the brainchild of Tim Berners-Lee, of the European Laboratory for Particle Physics, and its creation had nothing to do with particle physics; it was instead designed as a medium to easily store, access, update and ultimately share vast amounts of data.

Tim Berners-Lee was building on the concept of hypertext. Hypertext, as originally conceived, referred to any text containing links that let you move wherever you want within it, without having to do so in strict sequence.

In 1990 Berners-Lee wrote the first version of the “HyperText Markup Language”, now known simply as “HTML”, which is the code from which all present-day text-based web pages are made.

So What’s W3C got to do with this?

The links in a web page would not always take the user to the page or data they wished to access, because different formats of hypertext were being used. In other words, there was more than one protocol, and they did not all match up.

So in 1994, to help establish true World Wide Web intercommunication, Berners-Lee and other WWW pioneers founded the World Wide Web Consortium, or W3C.

In the past thirteen years the W3C has set many voluntary standards for the HTML used to build web pages, enabling those website designers who choose to adopt them to build websites that will be accessible from any computer operating system. It’s because of the overwhelming acceptance of these W3C standards that we now have such a reliable and universally usable Internet.
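As a small illustration, a page that follows one of these standards normally announces which one right at the top of its code, in a “doctype” declaration. The example below is the declaration for HTML 4.01 Transitional; other standards have their own versions:

    <!-- Tells browsers and validators which W3C standard this page claims to follow. -->
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
      "http://www.w3.org/TR/html4/loose.dtd">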

This is why you can view the same web page in many different internet browsers and still make sense of it, whether you are using Firefox, IE, Netscape, Mozilla, or Opera.

W3C HTML has since been enhanced with CSS, and will eventually be surpassed by W3C’s XHTML.
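To sketch what that CSS enhancement looks like in practice (the “warning” class name below is just an invented example), the presentational instructions move out of the individual tags and into a stylesheet, leaving the HTML to describe the structure of the page:

    <head>
      <title>CSS example</title>
      <style type="text/css">
        /* One rule here controls every paragraph marked with the "warning" class. */
        p.warning { color: red; font-weight: bold; }
      </style>
    </head>
    <body>
      <p class="warning">This paragraph gets its color and weight from the stylesheet, not from font tags.</p>
    </body>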

So just how does W3C compliance help to get your website noticed and better indexed by the search engines?

Well, that’s where we come back to the spiders. If you want to do well in SEO terms, you need to be a good arachnophile and make your website prime spider-attracting real estate.

Google keeps tabs on over eight billion web pages and does so with several different “bots”, aka “crawlers”, aka (to maintain the integrity of our metaphor) “spiders”.

These include DeepBot, FreshBot, MediaBot, AdsBot, ImageBot, GoogleBot-Mobile, and Feed-Fetcher Google, which for some reason has been excluded from the “Bot” club. That means Google has eight different types of spider scuttling around the World Wide Web, deciding what is worth adding to its 8 billion pages and what’s not.

Now if you were one of the itsy-bitsy spiders assigned to crawl over, examine and make decisions about all those web pages, you might just feel a bit overwhelmed. If you could rule out some of those pages as not up to scratch for any reason, you might just be tempted to do so, right?

Well, if you couldn’t read the content of a page easily, that would probably do it. The spiders have been trained to read W3C HTML (or CSS or XML) code, and if a site is coded in something else, or contains errors in its code, the spiders aren’t going to like it quite so much and may go looking elsewhere, which you don’t really want to happen.

So making your website code W3C compliant keeps those spiders happy and encourages them to come back to your site again and again. Hence SEO and W3C go hand in glove; just check the glove for spiders before putting it on. 🙂

Just remember that spiders do not see what a human visitor sees when they look at your website. Web browsers can make allowances for badly written code and still bring up a page that looks pretty much the way the designer intended it to look. But the spiders get to see the code in the web page and will know whether or not it is W3C compliant.
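To give a rough example of the difference (the file names here are made up), the snippet below is the kind of thing a browser will happily render while a W3C validator flags every line. A missing alt attribute, an unescaped ampersand and mis-nested tags are among the most common culprits:

    <img src="spider.gif">                        <!-- required "alt" attribute is missing -->
    <a href="page.html?a=1&b=2">Our products</a>  <!-- the "&" should be written as "&amp;" -->
    <p><b><i>Mis-nested tags</b></i></p>          <!-- closing tags are in the wrong order -->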

In other words, if you want to maximize your site’s search engine optimization, take the time to verify your website’s compliance with W3C standards.

You can start by submitting your URL to http://validator.w3.org for a check. If your site scores what you think is an unreasonable number of errors, have an expert go over your website’s source code and bring it into W3C compliance.
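For comparison, a corrected version of the earlier snippet (same invented file names) that should come back clean from the validator would look like this:

    <img src="spider.gif" alt="A spider">
    <a href="page.html?a=1&amp;b=2">Our products</a>
    <p><b><i>Properly nested tags</i></b></p>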

The first, and non-paying, visitors to your website will normally be the search engine spiders, so make sure they want to come again. Making your site W3C compliant will give you the best chance of that.

Making W3C compliance part of your SEO strategy will mean your human visitors will not be far behind!

