Meta Tags, Meta Robots, and Robots.txt

 

What is a Meta Tag?

Meta elements are HTML or XHTML elements used to provide structured metadata about a Web page. Such elements must be placed as tags in the head element of the page.

The meta tag has two uses: either to emulate the use of the HTTP response header, or to embed additional metadata within the HTML document.

Examples of meta tag usage


<meta http-equiv="Content-Type" content="text/html" >
<meta name="keywords" content="Your Website Keywords" >
<meta name="Description" content="Your Website Description" > 


How are meta tags used in search engine optimization?

Meta tags provide information about a given Web page, most often to help search engine spiders categorize it correctly. They are embedded in the HTML document, but are often not directly visible to a user visiting the site.

Meta tags have been a focus of search engine optimization, where different methods are explored to give a site a higher position in the search engines' organic rankings. In the mid to late 1990s, search engine bots relied on meta tags to correctly classify a Web page, and webmasters quickly learned the commercial significance of having the right meta elements, as it frequently led to a high ranking in the search engines and thus high traffic to the website.

As organic search traffic achieved greater significance in internet marketing plans, consultants were brought in who were well versed in search engine behavior. These consultants used a variety of techniques to improve rankings for their clients' websites.

While search engine optimization can improve organic rankings, consumers of such services should be careful to employ only reputable providers. Given the extraordinary competition and craftsmanship required for top organic placement, the connotation of the term "search engine optimization" has become progressively worse over the last decade. Where it once implied bringing a website to the top of a search engine's organic results page, for some consumers it now implies a relationship with keyword spamming, or optimizing a site's internal search engine for improved performance.

Major search engine crawlers are more likely to weigh factors such as the volume of incoming links from related websites, quantity and quality of content, technical precision of the source code, spelling, functional vs. broken hyperlinks, volume and consistency of searches and/or visitor traffic, time spent on the website, page views, revisits, click-throughs, technical user features, uniqueness, redundancy, relevance, advertising revenue yield, freshness, geography, language and other intrinsic characteristics.

Useful Meta Tags for SEO

  • author
    Who wrote this Web page? You can include a list of authors if multiple people wrote the content; this typically refers to the content authors rather than the designers of the HTML or CSS.
    <meta name="author" content="author name" />
     
  • copyright
    Set the copyright date on the document. Note, you shouldn't use this instead of a copyright notice that is visible on the Web page, but it's a good place to store the copyright in the code as well.
    <meta name="copyright" content="© 2008 Jennifer Kyrnin and About.com" />
     
  • contact
    This is a contact email address for the author of the page (generally). Be aware that if you put an email address in this tag, it can be read by spammers, so be sure to protect your email address.
    <meta name="contact" content="email address" />
     
  • last-modified
    When was this document last edited?
    <meta http-equiv="last-modified" content="YYYY-MM-DD@hh:mm:ss TMZ" />

Meta Tags for Communicating with the Web Browser or Server

These meta tags provide information to the Web server and any Web browsers that visit the page. In many cases, the browsers and servers can take action based on these meta tags.
  • cache-control
    Control how your pages are cached. The options you have are: public (default) - allows the page to be cached; private - the page may only be cached in private caches; no-cache - the page should never be cached; no-store - the page may be cached but not archived.
    <meta http-equiv="cache-control" content="no-cache" />
     
  • content-language
    Define the natural language(s) used on the Web page. Use the ISO 639-1 language codes. Separate multiple languages with commas.
    <meta http-equiv="content-language" content="en,fr" />
     
  • content-type
    This meta tag defines the character set that is used on this Web page. Unless you know that you're using a different charset, I recommend you set your Web pages to use UTF-8.
    <meta http-equiv="content-type" content="text/html;charset=utf-8" />
     
  • expires
    If the content of your page has an expiration date, you can specify this in your meta data. This is most often used by servers and browsers that cache content. If the content is expired, they will load the page from the server rather than the cache. To force this, you should set the value to "0", otherwise use the format YYYY-MM-DD@hh:mm:ss TMZ.
    <meta http-equiv="expires" content="0" />
     
  • pragma
    The pragma meta tag is the other cache-control tag to use if you don't want your Web page cached. Use both meta tags together, as shown in the combined example below, to prevent your Web page from being cached.
    <meta http-equiv="pragma" content="no-cache" />

Control Robots with Meta Robots Tags

There are two meta tags that can help you control how Web robots access your Web page.
  • robots
    This tag tells the Web robots whether they are allowed to index and archive this Web page. You can include any or all of the following keywords (separated by commas) to control what the robots do: all (default) - the robots can do anything on the page; none - robots can do nothing; index - robots should include this page in the index; noindex - robots should not include this page in the index; follow - robots should follow the links on this page; nofollow - robots should not follow links on this page; noarchive - Google uses this to prevent the page from being archived.
    <meta name="robots" content="noindex,nofollow" />
     
  • googlebot
    Google has its own robot, Googlebot, and it prefers that you use the googlebot meta tag to control it. You can use the following keywords to control Googlebot: noarchive - Google will not display cached content; nosnippet - Google will not display excerpts or cached content; noindex - Google will not index the page; nofollow - Google will not follow the links on the page.
    <meta name="googlebot" content="nosnippet,nofollow" />
     

What is Robots.txt?

Web site owners use the robots.txt file to give instructions about their site to web robots/crawlers/bots; this is called the Robots Exclusion Protocol (REP).

It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt and follows the directives it finds there.

Handy Robots.txt Cheat Sheet


This example allows all robots to visit all files: the wildcard * matches all robots, and the empty Disallow blocks nothing:

User-agent: *
Disallow:

This example tells all robots to stay out of a website:

User-agent: *
Disallow: /

The next example tells all robots not to enter four directories of a website:

User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/

Example that tells a specific robot not to enter one specific directory:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
Disallow: /private/

Example that tells all robots not to enter one specific file:

User-agent: *
Disallow: /directory/file.html

Note that all other files in the specified directory will be processed.

Example demonstrating how comments can be used:

# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out

Example demonstrating how to tell bots where the Sitemap is located:
User-agent: *
Sitemap: http://www.example.com/sitemap.xml  # tell the bots where your sitemap is located

Nonstandard extensions
Crawl-delay directive

Several major crawlers support a Crawl-delay parameter, set to the number of seconds to wait between successive requests to the same server:[4][5][6]

User-agent: *
Crawl-delay: 10
Allow directive

Some major crawlers support an Allow directive, which can counteract a following Disallow directive. This is useful when one tells robots to avoid an entire directory but still wants some HTML documents in that directory crawled and indexed. While in the standard implementation the first matching robots.txt pattern always wins, Google's implementation differs in that Allow patterns with an equal or greater number of characters in the directive path win over a matching Disallow pattern. Bing uses whichever Allow or Disallow directive is the most specific.

In order to be compatible with all robots, if one wants to allow single files inside an otherwise disallowed directory, it is necessary to place the Allow directive(s) first, followed by the Disallow, for example:

Allow: /folder1/myfile.html
Disallow: /folder1/

This example will Disallow anything in /folder1/ except /folder1/myfile.html, since the latter will match first. In the case of Google, though, the order is not important.


Sitemap
Some crawlers support a Sitemap directive, allowing multiple Sitemaps in the same robots.txt in the form:

Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: http://www.google.com/hostednews/sitemap_index.xml


Universal "*" match
The Robot Exclusion Standard does not mention anything about the "*" character in the Disallow: statement. Some crawlers like Googlebot and Slurp recognize strings containing "*", while MSNbot and Teoma interpret it in different ways.
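
As a hedged illustration (the query parameter name here is hypothetical), a nonstandard wildcard rule like the following would be understood by Googlebot and Slurp, but may be ignored or read differently by other crawlers:

User-agent: *
Disallow: /*?sessionid=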

Magento SEO - The Definitive Guide By Yoast

Magento SEO

It's now almost a year since the world saw the first stable release of Magento, and there is still no "definitive guide" to Magento SEO. A lot has been written on the subject, in the Magento forum and some blog posts, but nothing that gives a complete overview. It's time to bring all this knowledge and experience together in one big piece: the definitive guide to Magento SEO.
As search, SEO, and last but not least the Magento platform evolve, we will keep this Magento SEO article up to date with tips, tricks & best practices. Because Magento, though pretty stable, hasn't matured yet, the best practice is to stay updated with the latest release, at the moment of writing 1.4.
Need your website reviewed?
If you need an outside look at your Magento install, you might consider our Website Review. The result of this review is a full report of improvements for your site, covering key areas ranging from SEO to usability to site speed and more. For the cost of €595 (which is less than $800), you'll receive a report that represents a much larger value in turnover and profit!

1. Basic technical optimization

1.1. General Configuration

Magento is one of the most search-engine-friendly e-commerce platforms straight out of the box, but there are several known issues that can be taken care of to optimize your Magento SEO. The first step is to get the most recent release. Then, to get started, enable Server URL rewrites. You will find this setting under System => Configuration => Web => Search Engines Optimization. Another good thing to configure while you are on this screen is "Add Store Code to Urls" under "Url Options". In most cases it is better to set this functionality to "No".

1.1.1. WWW vs non-WWW

Under "Unsecure" and "Secure" you can find the Base URL, where you can set the preferred domain. You can choose between the www and the non-www version of the URL. With changing the setting you don't create a redirect from www to non-www or non-www to www but set only the preferred one. Therefore it is a good idea to create a 301 redirect through .htaccess with mod_rewrite. Besides solving the WWW vs non-WWW problem this redirect prevents Magento from adding the SID query to your URLs, like ?SID=b9c95150f7f70d6e77ad070259afa15d. Make sure the Base URL is the same as redirect. When editing the .htaccess file you can add the following code to redirect index.php to root.
Around line 119:
RewriteBase /
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.php\ HTTP/
RewriteRule ^index\.php$ http://www.mydomain.com/ [R=301,L]
Or, when your Magento install is not in the root but in the sub-directory http://www.mydomain.com/magento/:
RewriteBase /magento/
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /magento/index\.php\ HTTP/
RewriteRule ^index\.php$ http://www.mydomain.com/magento/ [R=301,L]
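The www/non-www 301 redirect mentioned in 1.1.1 isn't shown above. A minimal .htaccess sketch, assuming Apache with mod_rewrite enabled and the www version of the hypothetical domain www.mydomain.com as the preferred one:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]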

1.2. Header Settings

By default your Magento install has the title "Magento Commerce". For your Magento shop to get the traffic it deserves, you should keep the following in mind:
  • Search engines put more weight on the early words, so if your keywords are near the start of the page title you are more likely to rank well.
  • People scanning result pages see the early words first. If your keywords are at the start of your listing your page is more likely to get clicked on.
First of all you should get rid of the default title "Magento Commerce". Go to Configuration => Design => HTML Head. Choose a good and descriptive title for your website. This title will be used for several non-content pages without a custom title, e.g. "Contact Us" and the "Popular Search Terms" page.
To add your store name to all page titles, including categories and products, put your store name in "Title Suffix". It is a better idea to keep the Prefix empty, for the reasons mentioned above. Also keep "Default Description" and "Default Keywords" empty. For a non-production environment, to prevent indexing of the site, it may be useful to set "Default Robots" to "NOINDEX, NOFOLLOW" but for all other applications make sure it is set to "INDEX, FOLLOW".
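As a rough sketch, assuming a hypothetical Title Suffix of " - My Store" and the default "Contact Us" page, the resulting head would contain something like:

<title>Contact Us - My Store</title>
<meta name="robots" content="INDEX, FOLLOW" />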
Now that we are optimizing the <head> of your web-store pages, it is a good idea to add the new canonical tag. You can install the Canonical URLs for Magento module to add it to your head and improve your Magento SEO.
For some reason Magento turns non-set meta robots into a meta tag in this style:
<meta name="robots" content="*" />
This can result in some very strange behavior in the search engines, so we'll remove it. To remove these empty metas from your code, install the Yoast MetaRobots module.

1.3. CMS Pages

At first sight Magento may seem to lack some decent CMS functionality, but for most uses it will be flexible and powerful enough. One of the benefits of this simple CMS is that you can control every aspect of the pages. Once you've given each CMS page some decent content, pick an SEF URL identifier and page title (while keeping in mind the points under 1.2), and go to the Meta Data tab to write a description for each CMS page that you actually want to rank with.
You can leave the "Keywords" field empty. The description has one very important function: enticing people to click, so make sure it states what's in the page they're clicking towards, and that it gets their attention. Thus, the only well-written description is a hand-written one, and if you're thinking of auto-generating the meta description, you might as well not do anything and let the search engine control the snippet...
If you don't use the meta description, the search engine will find the keyword searched for in your document, and automatically pick a string around that, which gives you a bolded word or two in the results page.

1.4. Category optimization

Magento gives you the ability to add category names to the path for product URLs. Because Magento doesn't support this functionality very well - it creates duplicate content issues - it is a very good idea to disable it. To do this, go to System => Configuration => Catalog => Search Engine Optimization and set "Use Categories Path for Product URLs" to "No".
Now it's time to set the details for each category. Go to Catalog => Manage Categories. The most important fields are:
  • Meta Description: put an attractive description here; keep in mind that people will see it in the result listings of search engines.
  • Page Title: keep this empty to use the category name, including parent categories. When you customize it, the title will be exactly your input, without the parent category.
  • URL Key: try to keep a short but keyword-rich URL. Removing stop words like "the", "and", "for" etc. is usually a good idea. Also note that you can only set this for all store views at once; for a multi-language store you should keep it language independent.
For each store view you can specify the Name, Description, Page Title and Meta data. For multi-language stores this is really a great feature.

1.5. Products optimization

Optimization of the product pages is similar to categories. You can set the Meta Information for the "Default Values" and for each "Store View". Note that the "Meta Title" will overwrite the complete page title (including categories, but not the title prefix/suffix), not just the product name.
An often-overlooked aspect of Magento SEO is how you handle your images. By writing good alt tags for images and thinking about how you name the image files, for instance, you can get a nice bit of extra traffic from the various image search engines. Next to that, you're helping readers who browse your site with a screen reader make sense of what would otherwise be hidden to them.
By default the image file names are based on the product title, and the same value is used for the title and alt attributes. With some extra effort you can set the titles and alt tags for each product image. Under the "Images" tab of the Product Information you can set a label for each product image; this value will be used for the alt and title attributes. Of course you can do this for each Store View as well.
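A minimal sketch of the resulting image markup, with a hypothetical product image and label:

<img src="/media/catalog/product/b/l/blue-widget.jpg" alt="Blue Widget 500ml" title="Blue Widget 500ml" />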

2. Magento Template Optimization

2.1. Optimized Blank Template

The default Magento skins like "Default Theme", "Blue Skin" and "Modern Theme" don't do a very good job in the use of headings, so from an SEO perspective, there is a lot of room for improvement there. To make it easy on you, we have developed a Blank Magento SEO Theme, based on the core Magento Blank Theme, which incorporates all the things we've outlined below. You can download and discuss it here.

2.2. Headings

By default the logo is an <h1>, which it should only be on the front page; on all other pages it should be no more than an <h3>. The most important thing is to get the title of the content into an <h1> tag, e.g. on a category page it should be the category name and on a product page the product name.
The next step is to clean up the overuse of headings. It's a good idea to get rid of the heading usage in the side columns, or make the text relevant to the shop (i.e. include keywords). There is no reason to add "static" and keyword-less titles with an <h4>. It is, for instance, better to change all the <h4> tags inside <div class="head"> to <strong> tags. Now it is time to optimize your content: on category pages put the product names in an <h3> and the category name in an <h1>. On product pages, you should put the product name in an <h1>, as in the sketch below.
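For illustration, with hypothetical category and product names, the heading changes described above would look roughly like this:

<!-- category page: category name as the main heading, product names as sub-headings -->
<h1>Widgets</h1>
<h3>Blue Widget 500ml</h3>

<!-- sidebar block: the <h4> inside <div class="head"> replaced by a <strong> tag -->
<div class="head"><strong>My Cart</strong></div>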
To learn more about why proper headings are important read this article on Semantic HTML and SEO.

2.3. Clean up your code

Move any JavaScript and CSS you might have in your template files to external JavaScript and CSS files, and keep your templates clean, as inline code isn't doing your Magento SEO any good. Externalizing makes sure your users can cache those files on first load, and search engines don't have to download them most of the time.

2.4. Aim for speed

A very important factor in how many pages a search engine will spider on your shop each day is how quickly your shop loads.
You can do two things to increase the speed of your Magento install:
  1. Enable caching. Go to System => Cache Management and enable all caching features.
  2. Use a good host and server configuration. With MySQL tuning and a PHP opcode cache you can improve the speed of Magento dramatically.
NOTE: there is a rumor that with the 1.3 release of Magento new functionality will be added with huge performance improvements.
Another thing to look at is the number of external files. For each file you make people download, their browser has to create another connection to the webserver. So it is a very good idea to reduce the number of external files and combine several external files into one. By default Magento already combines (almost) all JavaScript files into one file.
It doesn't do this for stylesheets though: the default template has 6 different stylesheet files. You can combine the content of these stylesheets into one new one, except for the print.css file, or you can use the Fooman Speedster module. Besides combining files, this module also compresses and caches your JavaScript and stylesheet files. (Please note the requirements for Speedster: mod_rewrite has to be enabled and your server needs to have .htaccess support.) If you use Canonical URLs for Magento and Fooman Speedster together, you need to overwrite the Canonical module with this download.
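As a sketch of the manual approach (file names and paths are illustrative), the separate stylesheet links could be reduced to one combined file, keeping print.css on its own:

<!-- before: several separate stylesheets -->
<link rel="stylesheet" type="text/css" href="/skin/frontend/default/default/css/boxes.css" media="all" />
<link rel="stylesheet" type="text/css" href="/skin/frontend/default/default/css/menu.css" media="all" />
<!-- after: one combined stylesheet, plus print.css kept separate -->
<link rel="stylesheet" type="text/css" href="/skin/frontend/default/default/css/combined.css" media="all" />
<link rel="stylesheet" type="text/css" href="/skin/frontend/default/default/css/print.css" media="print" />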

3. Advanced Magento SEO and Duplicate Content

Once you have done all the basic stuff you will find that the rest of the problems amount to one simple thing: duplicate content. Loads of it, in fact. For products you have, at least, the following URLs with exactly the same content:
  • domain.com/product.html
  • domain.com/category1/product.html
  • domain.com/catalog/product/view/id/1/
  • domain.com/catalog/product/view/id/1/category/1/
Besides that you have pages like the product review pages with almost the same content. Another problem is categories: layered navigation and the sorting options generate a load of duplicate content. In essence that means that, in the worst case, a product is available on at least 4 pages beyond the page where it should be available.
We're going to get rid of all those duplicate content pools by still allowing them to be spidered but not indexed, and by fixing the sorting options and layered navigation for categories.

3.1. Noindex, follow for non-content pages

Install the Yoast robots meta module and make sure the settings prevent indexing of all non-content pages.
Now the search engine will follow all links on these pages but it won't show those pages in the index.
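On those non-content pages the module's output boils down to a robots meta tag like this:

<meta name="robots" content="noindex,follow" />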
Another easy step to improve your Magento SEO is to stop linking to your login, checkout, wishlist, and all other non-content pages. The same goes for your RSS feeds, layered navigation, add to wishlist, add to compare, etc. There is still no Magento plugin to work around this, so you will probably have to go into your template files and add nofollow to those links by hand, as in the sketch below.
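A hand-edited template link might then look like this (URL and anchor text hypothetical):

<a href="/checkout/cart/" rel="nofollow">My Cart</a>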

3.3. Canonical URLs

To help search engines understand the duplicate content on your pages, you can suggest the preferred version of the URL for each page using the new canonical URL tag, so you should install the Canonical URLs for Magento module.
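Using the duplicate product URLs from section 3 as an example, each variant page would then point search engines to the preferred URL with a tag roughly like:

<link rel="canonical" href="http://domain.com/product.html" />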

3.4. XML Sitemaps

XML Sitemaps are an easy way of letting search engines know where your content is; they won't help you rank, but they might help you get indexed faster. You can create an XML sitemap manually by going to Catalog => Google Sitemap => Add Sitemap, choosing a filename, path and store view, and then pressing "Save & Generate".
You can then simply put the following code in your robots.txt file to point the search engines to your sitemap.xml file:
Sitemap: http://domain.com/sitemap.xml
As your inventory changes, you'll have to re-generate your XML sitemaps. To make sure they stay up to date, the best way is to set up a cron job, a process which is extensively described here.
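A minimal crontab sketch, assuming a typical Magento 1.x install with cron.php in the web root (the path and interval are illustrative):

*/30 * * * * php -f /var/www/magento/cron.php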

Conclusion: Magento SEO development

This article has covered all the aspects of Magento SEO. If you have any feedback or additions, let us know, so we can keep improving this article. We're working closely with the Magento core development team to improve the SEO aspects of Magento, so we're actively trying to get some of the ideas in this article into the Magento core.

Original Article by Joost de Valk

Lightning Seeds - Touch And Go (Nothing's changed but nothing seems the same)





A smile to break the ice, can I know your name

If you say no and it's touch and go

nothing would have been the same

Oh was it star-dust or just lust

Well one touch is just not enough

Faces change but somewhere in the passing crowd

The face you just can't live without

Nothing's changed but nothing seems the same

Remembering a thousand things I meant to say

The past's a sea of boys and girls

Who disappeared without a word

All friends of mine who had their time, then drifted away

Delights, dizzy heights, will you change your name?

If you say no and it's touch and go

Nothing's going to seem the same

Oh but the stars are out, they're shining down

They whisper through the passing clouds

Don't be scared the wind will come, blow you home

Don't be afraid, 'cos you're not alone

Nothing's changed but nothing seems the same

Remembering a thousand things I meant to say

The past's a sea of boys and girls

Who disappeared without a word

All friends of mine who had their time, then drifted away

Tell it to me one more time

Oh 'cos I'll never get to feel the same way, tonight

I'll remember every line, [every line]

I'll never get to hear the same words, not twice

Nothing's changed but nothing seems the same

Remembering a thousand things I meant to say

The past's a sea of boys and girls

Who disappeared without a word

All friends of mine who had their time, then drifted away

Remember all the Mersey skies

On rainy streets, I spent the night

With friends of mine who had their time then drifted away


Lightning Seeds Touch And Go 


Lightning Seeds

The Lightning Seeds are an English alternative rock and pop band from Liverpool, England, formed in 1989 by Ian Broudie, formerly of the band Big in Japan. (Wikipedia)

Active From: 1989

Origin: Liverpool, England

Record label: Ghetto, Virgin, Epic


Songs and years:
  • Pure (1989)
  • You Showed Me (1996)
  • The Life of Riley (1990)
  • Lucky You (1994)
  • Three Lions '98
  • Ready or Not (1996)
  • Marvellous (1994)
  • Sense (1990)
  • Change (1994)
  • Sugar Coated Iceberg (1996)
  • All I Want (1989)
  • Touch and Go
  • Football Is Coming Home
  • Like You Do
  • Perfect


10 Must Have Free Apps for Freelancers


When you want to be an industry leader, you have to be smart and organized. Nowadays most freelancers use many applications to organize their work. Here are 10 must-have free apps for freelancers who want to be successful.

1. CamStudio
This app can instantly record screen and audio activity on your computer and create industry-standard AVI video files, and using its built-in SWF Producer it can turn those AVIs into lean, mean, bandwidth-friendly streaming Flash videos (SWFs). CamStudio comes in handy if you create tutorials, troubleshoot issues that are too complicated to explain via email, or want to show off your work to a potential client.

2. Project HQ
Project HQ is a great-looking project management tool that can be seen as a Basecamp alternative. Not only does it help you manage multiple projects, it also includes the ability to track projects and milestones and to create task lists and tasks.

3. Flow
Flow is a web-based task management app. With Flow, everything relating to your project (files, deadlines, tasks, and discussions) is centralized in one place for everyone.

4. Google Docs
Google Docs is one of the best tools available when you need to share documents or collaborate with fellow freelancers. You can easily create and edit a variety of file types like documents, spreadsheets, forms, and presentations. A recent update added some helpful new features, including a more streamlined sharing process.

5. Picnik (Free)
Need to re-size an image or crop a photo? Picnik should do the trick. It is a very simple-but-useful editing tool that you can use to crop, re-size and automatically enhance your images and photos. It also has an extension for Google’s Chrome browser.

6. Ge.tt
With 2GB of free space and the ability to share files while uploading, Ge.tt is a file-sharing app that makes uploading and distributing files of any size pretty seamless. You can also reuse links and get statistics for your documents – keeping track of how many times your file has been downloaded, for example.

7. Wave (Free)
If numbers are not your specialty, life just got a bit easier. Wave is a free accounting application that looks after your bookkeeping so you can focus on your work. For example, it can automatically categorize your expenses, and it learns as you use the program. You can also access Wave through its Google Chrome app.

8. Feedly
Feedly is a multi-platform, cross-browser web app that syncs with Google Reader and displays your RSS feeds beautifully – it practically invites you to sit down and read through every article. Beyond its attractive interface, Feedly lets you share articles via social networks and save content to read later.

9. Simplenote
If you are looking for a note-taking application that is not too heavy on features, Simplenote is one of the best. Using your free account, you can create notes on your computer or phone and organize them using tags or pins – which help keep specific notes at the top of the pile. You can also review older versions of notes and sync your account with third-party applications.

10. Toggl
Toggl is a time-tracking tool with a nice interface. It's simple to use, and there's both a web version and a downloadable version (Windows only). And it's free.

Read More About 25 Content Creation and Marketing Tools.

About Backlinks!

While Google is busy with the Panda updates, Google+ Local and the Penguin update, the search results aren't what they used to be. For the first time ever, Google is going to penalize websites with bad links and over-optimization issues. But backlinks are still important for a website's organic ranking on Google, and this article looks at how important backlinks are.

When setting up your website for SEO (Search Engine Optimization) on Google, there are several factors you need to look at in order to obtain a high organic rank. Of course, your content and meta tags must represent your data with sensible keyword density. Google then evaluates your website algorithmically and assigns it a numeric value based in part on one of the most important factors: backlinks.


A backlink and a reciprocal link say much the same thing to the Google engine: that your site should be ranked higher because other people find value in what your website has to offer, and thus provide a link to your site. After the Google Penguin update, however, reciprocal links can be considered a violation of Google's webmaster guidelines and can negatively impact your site's ranking in search results. As per Google's Webmaster Guidelines, you shouldn't engage in:

• Links intended to manipulate PageRank
• Links to web spammers or bad neighborhoods on the web
• Excessive reciprocal links or excessive link exchanging ("Link to me and I'll link to you.")
• Buying or selling links that pass PageRank

When Googlebot crawls a website and finds a backlink to your website, and also finds an outgoing link from your website back to that website, Google may determine that you are engaging in link exchange schemes.

So, if you did link exchanges in the past, we advise you to remove all links that represent reciprocal linking.