Domain authority: Domain authority, or DA, is a third-party metric created by Moz to approximate how high your website will rank in SERPs. DA ranges on a scale of 0-100. Numbers closer to 100 indicate that the website is likely to rank well in SERPs, and lower numbers indicate that the website is not likely to rank well. DA is not a ranking factor in Google and has no effect on SERPs — it’s simply for SEOs, webmasters, marketers, and others to get a sense for how powerful the website likely is in a search engine’s eyes.
Although DA is specific to Moz, other SEO-focused tools offer comparable metrics of their own. Examples can be found below:
- Authority Score (Semrush)
- Domain Rating (Ahrefs)
- Trust Flow (Majestic)
Deindexation: Deindexation occurs when a URL is taken out of a search engine’s index. The index is essentially a search engine’s address book of the internet. You can think of it as a list of all of the pages that a search engine bot has found as it’s crawled all over the web. When a page is de-indexed, its URL is taken out of that “address book.”
In order to de-index a page in Google, you can submit a URL removal request in Google Search Console. Expect de-indexation to take anywhere from a few days to a few weeks; the rate at which content is removed from Google’s index depends on how much crawl bandwidth Google allocates and how important your website is to Google. You can also add a robots noindex meta tag within the <head> tag of a single page, as follows:
- <meta name="robots" content="noindex">
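Before waiting on Google, it's worth confirming that a page actually carries the noindex directive. Here's a minimal sketch using Python's standard-library html.parser; the sample page string is illustrative:

```python
from html.parser import HTMLParser

class NoindexChecker(HTMLParser):
    """Scans a page's HTML for a robots meta tag containing 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            if "noindex" in attrs.get("content", "").lower():
                self.noindex = True

def has_noindex(html: str) -> bool:
    checker = NoindexChecker()
    checker.feed(html)
    return checker.noindex

# Illustrative page source
page = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'
print(has_noindex(page))  # True
```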
Disavow: The disavow tool within Google Search Console allows you to tell Google not to consider a set of low-quality backlinks (also known as toxic backlinks) to your site. If a website with a high spam score is linking to your website, Google may penalize your site for that backlink, thinking you’re attempting to build a backlink profile through spammy means. In order to remove that penalty, you must tell Google not to take those low-quality backlinks into account, which demonstrates that your backlinking strategies are well-intentioned. You can find step-by-step instructions on using the disavow tool here.
DNS: DNS stands for Domain Name System. DNS is the address book of the internet, mapping the domain names that people type in to IP addresses. It’s what allows us to type in www.hubspot.com when we want to go to HubSpot’s website rather than the string of numbers (the IP address) that represents the location of the website’s content. The system itself is composed of multiple levels of servers that communicate with each other to locate the information an internet user is requesting when that user types in a web address.
Domain name registrar: A domain registrar is an accredited company that has the right to sell domain names. If you want to buy a domain name like MyCompanyName.com, you’d have to go directly to a domain name registrar or to a reseller who’s under contract with a registrar in order to buy that name and therefore have the rights to publish content at MyCompanyName.com. Examples include: Domain.com, GoDaddy, Bluehost, Register.com, and HostGator.
Doorway page: A page created with the intent of ranking highly for certain search queries that subsequently sends users to a different site. Using doorway pages is considered a black hat SEO technique because it can lead to multiple pages that ultimately point users to the same final destination all ranking in the same SERP. That means that there are fewer unique results for the user to select from in the SERP. Doorway pages range in shadiness — on the worst end of the spectrum, they might be keyword-stuffed, machine-generated content. On the less concerning end, they might be separate webpages targeted at different regions that eventually bring users to the same content. Doorway pages often contain hidden text and sometimes automatically redirect users to one ultimate page that the site’s owner wants the users to see.
Duplicate content: When the same content (a substantive portion of text or other page components) appears on two URLs, Google sees it as duplicate content. The content is considered duplicate whether it appears on the same domain or separate domains.
There are common non-malicious practices that result in duplicate content — for example, discussion forums that generate a different, more pared-down version of the forum for mobile devices may end up generating two separate URLs.
Duplicate content can hurt search rankings because it causes a number of problems for search engines:
- The search engine isn’t sure which URL to rank in results for a specific query because it can’t tell which page is better.
- The search engine doesn’t know how to split the value of the content (as judged by keywords, anchor text, link equity, trust, and more) between the URLs.
Typically, search engines deal with duplicate content by selecting one of the two pages. In cases where you’re aware that you have duplicate content, you can use canonical tags to tell Google which one to rank.
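If you want to spot exact duplicates across your own URLs, one lightweight approach is to hash a normalized copy of each page's text; identical fingerprints flag identical content. A rough sketch (the sample page strings are made up, and this only catches near-verbatim copies, not paraphrases):

```python
import hashlib
import re

def content_fingerprint(text: str) -> str:
    """Normalize whitespace and case before hashing, so trivially
    reformatted copies of the same content produce the same fingerprint."""
    normalized = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Two hypothetical pages with the same underlying content
page_a = "Our Widget Guide.\nEverything you need to know about widgets."
page_b = "our widget guide. everything   you need to know about widgets."

print(content_fingerprint(page_a) == content_fingerprint(page_b))  # True
```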
E-A-T (Expertise-Authoritativeness-Trustworthiness): Google’s shorthand for what a high-quality page needs. Google’s human search quality raters use E-A-T to evaluate pages, and Google has said its ranking systems are designed to reward content that demonstrates strong E-A-T, so pages with high E-A-T tend to outrank pages with low E-A-T.
- Expertise is demonstrated with high-quality main content that’s developed by someone who is actually an expert on that topic — for example, medical advice should be created by a doctor or a credentialed medical body.
- Authoritativeness is achieved when the creator of the information is uniquely positioned to provide information about the topic. For example, the National Parks Service would have the most reliable information about entry fees to its parks.
- Trustworthiness measures the safety of the browsing experience. A more trustworthy site is secure, doesn’t expose its users’ information, and makes the webmaster’s contact information easy to find.
You can read Google’s in-depth analysis of E-A-T in its Search Quality Evaluator Guidelines.
*Note: Some sources online inaccurately represent this concept as Expertise-Authority-Trust, but Google’s guidelines confirm that the acronym actually stands for Expertise-Authoritativeness-Trustworthiness.
External links: A link that points to a webpage that exists outside of the domain where the link lives. If your site links out to a source for a certain piece of information, that’s an external link. If another site links out to your site, that’s also considered an external link on their site.
In HTML, this looks like: <a href="http://www.otherwebsite.com/">Link Anchor Text</a>
Fetch and Render: A tool from Google (since folded into the URL Inspection tool in Google Search Console) that allows you to enter a URL and see how Google itself views that page. This enables you to check that the content you want to block from Google is blocked successfully. In the process, Google will also check all of the links on the page. Once the fetch and render is complete, you’ll see a side-by-side comparison of what a user sees and what Google sees. You can then pinpoint any errors that you may need to correct.
‘follow’ links: When you link out from a page on your site, you pass authority onto the page you link to unless you specify otherwise (see: ‘nofollow’ link). By creating a link without a ‘nofollow’ attribute, you’re telling search engines that they can pass authority on to the linked page, thus helping that page rank higher.
*Note: Some SEO resources online refer to ‘follow’ links as ‘dofollow’ links, but ‘dofollow’ does not actually exist as an HTML attribute. You do not need to add an HTML attribute to tell Google to follow a link.
Header tags: Header tags are denoted by <h1>, <h2>, <h3>, etc. in the body HTML of a page. They tell the site visitor and search engines what each section of the page is about. <h1> denotes the title of the page, whereas <h2> is typically a subheader. The header tags should include the keywords you’re hoping to rank for.
In HTML, if you wanted to name an article ‘Your Title Here’, your line of code would read as follows:
- <h1>Your Title Here</h1>
Hreflang: The hreflang HTML attribute tells Google the language (and, optionally, the region) of your content so that it can serve your page to users searching in that language. For example, a Spanish hreflang tag tells Google to serve the Spanish version of a page, rather than the English version, to users whose search engine language is set to Spanish. The hreflang tag denotes that the page is an alternative to another page with the same content, just in a different language. Hreflang is often used to account for currency, shipping, or cultural differences.
Google uses ISO 639-1 codes to represent languages, optionally combined with ISO 3166-1 Alpha 2 codes to target a region.
An hreflang tag can be implemented in the on-page markup, the HTTP header, or the sitemap as follows (the example below targets Spanish speakers in Spain using the "es-es" annotation):
<link rel="alternate" href="http://example.com" hreflang="es-es" />
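If you maintain several language versions of a page, the full set of alternate tags can be generated programmatically. A small sketch in Python, where the URLs and locale codes are placeholders:

```python
def hreflang_tags(variants: dict) -> list:
    """Builds one <link rel="alternate"> tag per locale variant.
    `variants` maps ISO language-region codes to that variant's URL."""
    return [
        f'<link rel="alternate" href="{url}" hreflang="{code}" />'
        for code, url in variants.items()
    ]

# Hypothetical English and Spanish versions of the same page
tags = hreflang_tags({
    "en-us": "http://example.com/",
    "es-es": "http://example.com/es/",
})
for tag in tags:
    print(tag)
```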
HTML sitemap: An outline of a website’s navigation that’s written for humans. Most HTML sitemaps are linked in the bottom navigation of a website. They enable site visitors to find pages they’re looking for. It’s similar to an index at the end of a book, which is intended to help its readers locate a specific section among a huge amount of content.
Here’s an example of an HTML sitemap:
Image Source: Statcounter
*Note: Most of the time someone references a sitemap, they’re talking about the XML sitemap, which is also an outline of a site’s navigation but is written for search engines rather than humans.
Hummingbird: An update to Google’s algorithm in Sept. 2013 that overhauled the way that Google analyzed a search query. Instead of solely focusing on finding matches for words in the search query, Google started to look for the meaning behind the query as a whole by using natural language processing. It also improved the mechanics of the Knowledge Graph, which meant that Google was more effectively able to answer question queries right in the SERP (rather than forcing users to click through on one of the search results). (Source: WordStream)
Image compression: Image compression is a process that reduces the file size of an image, meaning it will take up less storage space and load more quickly. By compressing images, you can reduce load time, create a better user experience, and optimize for search engines’ crawlers.
There are two main types: lossy and lossless. Lossless compression preserves image quality; when the compressed file is opened, the image looks exactly as it did before compression, whether it was emailed or published on a website. Lossy compression actually discards parts of the image data, so the quality is reduced when the compressed file is opened. However, lossy-compressed photos take up less storage space than lossless ones. (Source: KeyCDN)
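The lossless round trip can be demonstrated with Python's standard zlib module, which implements DEFLATE, the same algorithm PNG files use internally. This is only an illustration of the principle, not an image pipeline:

```python
import zlib

# Repetitive data compresses well; the decompressed bytes come back
# identical to the original, which is what "lossless" means.
original = b"the quick brown fox " * 500
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed))  # compressed is much smaller
print(restored == original)                  # True: nothing was lost
```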
Image sitemap: An image sitemap is an XML file that provides metadata about the photos on your site. It gives search engines more context about the images, helping search engines discover more than they might have otherwise. The metadata could include the title of the photo, the source, the location, or other pieces of information. You can see an example of an image sitemap here.
Internal links: An anchor link on a site that links to another page on the same domain. For example, a link on www.hubspot.com that points to www.hubspot.com/pricing/ is considered an internal link because both pages are on the domain hubspot.com.
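A simple way to classify a link as internal or external is to compare hostnames with Python's urllib.parse. Note that this simplified check treats subdomains as distinct hosts:

```python
from urllib.parse import urlparse

def is_internal(link_url: str, site_url: str) -> bool:
    """A link is internal when it points at the same host as the
    page it lives on (simplified: subdomains count as different hosts)."""
    return urlparse(link_url).netloc == urlparse(site_url).netloc

print(is_internal("https://www.hubspot.com/pricing/", "https://www.hubspot.com"))  # True
print(is_internal("http://www.otherwebsite.com/", "https://www.hubspot.com"))      # False
```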
Keyword density: A measure of the number of times a keyword is repeated on one webpage. It’s calculated by dividing the number of times a keyword appears on a web page by the total number of words on the page.
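That calculation can be sketched in a few lines of Python; this naive version only handles single-word keywords:

```python
def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of the keyword divided by the total word count."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

# Illustrative page text: "seo" appears 2 times out of 7 words
sample = "seo tips for beginners learning seo basics"
print(keyword_density(sample, "seo"))  # 0.2857142857142857 (2/7)
```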
Previously, high keyword density was thought to be good for SEO because it would give search engines clarity on the site’s content. However, people were “keyword stuffing,” or repeating a keyword so many times and so unnaturally that it created a poor user experience, so Google stopped using keyword density as a ranking factor. Now, search engines are more likely to consider a website that repeats or stuffs keywords spammy and penalize it in SERPs.
Lazy loading: When a page is coded in such a way that the page components load only when the user needs them, rather than all at once upon the first page load. For example, the images after the first one in an image carousel may load only when the user starts to flip through the carousel.
The effect of lazy loading on SEO depends on its use. If a large piece of text containing helpful information (e.g. a Q&A section or a blog post) is lazy loaded, web crawlers may completely skip over that piece of text because it hasn’t loaded in yet — which would mean that the search engine could miss out on target keywords and understanding the structure of the page. However, if data-heavy images are lazy-loaded in a thoughtful way, it can improve page load time and create a better user experience by not overwhelming the user as soon as they load the page. (Source: StackPath)
Link building/link acquisition: Link building is an SEO strategy that involves placing hyperlinks back to your site on other websites by reaching out to the webmasters of those other sites. A high number of high-quality backlinks from relevant and authoritative websites can significantly improve your website’s search rankings.
Link buying: A black hat SEO technique that involves paying money to another business or individual to place links to your site on their site in an attempt to generate more backlinks. Link buying has long violated Google’s Terms of Service.
Link equity: The authority that’s passed from Page A to Page B when Page A hyperlinks out to Page B. The amount of authority, casually referred to as “link juice,” depends on how authoritative Page A is, how closely related the content on the pages is, and other directions given in the hyperlink (e.g. a ‘nofollow’ attribute signals to web crawlers that they should ignore the link). (Source: Moz)
Link reclamation: The process of systematically reaching out to other websites and fixing your own site when you find dead or broken links to your site in an effort to preserve the link equity flowing through the links to your pages. This process often needs to happen after URLs are changed in bulk or a website undergoes a redesign and content is changed or removed. There are tools that can help you identify broken links to your website pages. Cleaning up these dead links improves both the user experience and the ability of bots to crawl your hyperlinks. (Source: SearchEngineJournal)
Local Business schema/local business listing: When users Google businesses in their vicinity, Google often returns a Knowledge Graph card that allows users to see images of it, the address, the rating, the phone number, and more. To tell Google what to put in those information slots, businesses can follow Local Business listing markup rules, or schema markup. This typically involves using JSON-LD formatting in your webpage’s HTML to tag information to make it easier for Google to find key facts.
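A Local Business JSON-LD block can be assembled as plain data and serialized into the page. The business details below are placeholders; the @type and property names come from the standard schema.org LocalBusiness vocabulary:

```python
import json

# Hypothetical business details; only the schema.org structure matters.
listing = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Coffee Shop",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "postalCode": "01101",
    },
    "telephone": "+1-555-0100",
}

# JSON-LD is embedded in a script tag in the page's HTML.
snippet = '<script type="application/ld+json">' + json.dumps(listing) + "</script>"
print(snippet)
```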
Here is an example of what a local business listing looks like in a mobile result:
(Image Source: Google)
Meta description: A description of your page that you denote with a <meta> tag in the <head> HTML of your page, telling search engines and searchers what your page contains. A search engine-optimized meta description contains your target keyword and communicates the value that the page will give to the people who read it. Google truncates meta descriptions at around 160 characters. Although meta descriptions are not a ranking factor according to Google, they’re crucial to convincing searchers that your page is worth clicking through to.
The code might look like this:
<meta name="description" content="This is where you write your meta description.">
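Since anything past the cutoff gets truncated in the SERP, a simple length check can catch overlong descriptions before publishing. The 160-character limit below is an approximate cutoff, not an exact figure Google guarantees:

```python
MAX_LENGTH = 160  # approximate cutoff Google applies in SERPs

def check_meta_description(description: str) -> str:
    """Flags descriptions likely to be truncated in search results."""
    if len(description) <= MAX_LENGTH:
        return "ok"
    return f"too long by {len(description) - MAX_LENGTH} characters"

print(check_meta_description("A concise summary of the page."))  # ok
```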
A meta description often appears just below the title tag and the URL of your page in search engine results. The following example shows where it appears on a SERP:
Meta refresh tag: A meta refresh is a command that webmasters can code into HTML to tell a browser to refresh a page after a certain number of seconds. It uses the <meta> tag, and ‘refresh’ is the value that the webmaster puts in for the http-equiv attribute. Google indexes the page that exists upon refresh rather than the page that initially loads to avoid spammers who try to trick users by taking them to the page they clicked on and then quickly redirecting them somewhere else.
The code would look like this for someone who wanted their page to refresh after 20 seconds:
- <meta http-equiv="refresh" content="20" />
It’s also possible to implement a redirect by using a meta refresh tag with the time set to 0 seconds and an additional URL attribute set to the destination URL.
The code would look like this for someone who wanted to redirect users immediately (after 0 seconds):
- <meta http-equiv="refresh" content="0;url=[destination URL goes here]" />
However, Google recommends using a 301 redirect rather than a meta refresh tag to implement a redirect. A sudden, unexpected redirect can give search engines and users a misleading or confusing experience, cause the site to be perceived as spam, and ultimately lead to lower rankings.
Minification: A process done after coding a page but before launching that page that involves stripping all of the unnecessary data out of a page to ensure that the page loads as quickly as possible. Examples of minification techniques include reducing whitespace on a page and shortening variable and function names. The net effect is reduced page load time, so using minification can improve SEO by improving PageSpeed. (Source: Stackpath)
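A toy example of the idea in Python: stripping comments and collapsing whitespace from a JavaScript string. Production minifiers (e.g. terser, cssnano) do much more, including renaming variables, and are safe around string and regex literals, which this sketch is not:

```python
import re

def minify_js_whitespace(source: str) -> str:
    """Naive minification: removes line comments, then collapses
    all runs of whitespace into single spaces."""
    source = re.sub(r"//[^\n]*", "", source)  # drop // line comments
    source = re.sub(r"\s+", " ", source)      # collapse whitespace
    return source.strip()

code = """
// add two numbers
function add(a, b) {
    return a + b;
}
"""
print(minify_js_whitespace(code))  # function add(a, b) { return a + b; }
```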
Mobile viewport tag: The viewport refers to the area of a page that a user on a specific device can see. The mobile viewport tag is the HTML instruction given in the <meta> tag of a page’s header that tells the browser how to deal with the page’s dimensions since usually, the width of a page on a desktop computer is greater than the width on a mobile device. It’s important to include the viewport tag to demonstrate to search engines that your page is mobile-friendly, which helps you rank higher.
Someone who wanted to tell the browser to adjust the page to the width of the device rather than keeping it at its maximum width across all devices would use the following code:
- <meta name="viewport" content="width=device-width, initial-scale=1">
The width=device-width section tells the browser to take the width of the device into account when rendering the page; the initial-scale=1 portion tells the browser the initial zoom level it should use when it first loads the page. (Source: W3Schools)
Here are two images of the same page, one that includes the viewport meta tag and one that does not:
(Image Source: W3Schools)
Mobile-first indexing: Google’s practice of crawling, indexing, and ranking the mobile version of a page before the desktop version. This means that Google prefers responsive web design, looks to the mobile URL while indexing if there are separate URLs for different device types, and gives preference to pages created in AMP HTML, among other best practices. As of late 2018, Google crawled over half of the pages in search results according to this procedure, and as of May 2019, all pages that Google discovers for the first time are evaluated using mobile-first indexing.