
What frequency to use when compiling a semantic core. How to create a semantic core for an online store: step-by-step instructions. Types of search queries by search type

Useful materials from the blog on collecting keys for semantics, query clustering and site page optimization.

Article topics:


Semantic core



A well-composed semantic core brings only relevant users to your site; an unsuccessful one can bury it deep in the search results.

Working with the queries in the semantic core (SC) consists of collection, cleaning and clustering. Once the grouping is done, you need to determine the best place for each group: a page on your resource, a piece of content on your own site, or a third-party site.


How to collect keys for the SC


Briefly about the important things: which operators to use in Wordstat to view the necessary queries, and how to make your work in the service easier.

Wordstat does not provide perfectly accurate information: it does not contain every query, and the data may be skewed because not all consumers use Yandex. Still, this data lets you judge the popularity of a topic or product, roughly forecast demand, collect keys and find ideas for new content.

You can search for data by simply entering a query into the search service, but to specify queries there are operators - additional symbols with clarifications. They work on the word and region search tabs; on the query history tab, you can only use the “+query” operator.

In the article:

  • Why do you need Wordstat?
  • Working with Wordstat operators
  • How to Read Wordstat Data
  • Extensions for Yandex Wordstat
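For illustration, here is a small sketch that composes query variants using the common Wordstat operators mentioned above (quotation marks for phrase frequency, "!" for an exact word form). The helper name and the broad/phrase/exact labels are my own, not the service's terminology.

```python
# Sketch: composing Yandex Wordstat query variants with common operators.
# Operator semantics assumed from Wordstat's documented behavior:
#   "query"  - phrase frequency (only these words, in any form)
#   !word    - fix the exact word form
def wordstat_variants(phrase: str) -> dict:
    words = phrase.split()
    return {
        "broad": phrase,                     # all queries containing these words
        "phrase": f'"{phrase}"',             # only these words, any form
        "exact": '"' + " ".join("!" + w for w in words) + '"',  # exact forms
    }

variants = wordstat_variants("buy dress")
print(variants["exact"])  # "!buy !dress"
```

Pasting the "exact" variant into Wordstat shows how many people typed precisely that word form, which is useful when comparing near-duplicate keys.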

We strive to become leaders in the search results: how analysis of articles from the top will help in working on content, what criteria to use for analysis and how to do it faster and more efficiently.

It is difficult to track the results of blogging and publishing other texts on the site without detailed analytics. How can you understand why your competitors' articles are in the top but yours are not, even though you believe you write better?

In the article:

  • What is usually recommended?
  • How to analyze
  • Disadvantages of the approach
  • Benefits of Content Analysis
  • Tools

How to write optimized texts


Which content gets more links and social signals? Backlinko, in partnership with BuzzSumo, analyzed 912 million blog posts, looking at article length, headline format, social signals and article backlinks, and came up with recommendations for content marketing. We translated and adapted the study.

In the article:

  • Brief conclusions from the content study
  • New knowledge about content marketing, in detail:
  1. Which Content Gets More Links?
  2. Which texts are more popular on social networks?
  3. Backlinks are hard to get
  4. What materials are getting all the reposts?
  5. How is the number of backlinks and reposts related?
  6. Which headlines bring more shares?
  7. What day of the week is best to publish content?
  8. What content format is reposted most often?
  9. Which Content Format Gets More Links?
  10. How B2B and B2C content generates links and reposts

Today, factors such as content and structure play the most important role in search engine promotion. But how do you decide what to write about and which sections and pages to create on the site? You also need to find out exactly what the target visitor to your resource is interested in. To answer all these questions, you need to collect a semantic core.

The semantic core is a list of words or phrases that fully reflects the theme of your site.

In the article I will tell you how to pick it up, clean it and break it down into structure. The result will be a complete structure with queries clustered across pages.

Here is an example of a query core broken down into a structure:


By clustering I mean breaking your search queries down into separate pages. This method is relevant for promotion in both Yandex and Google. In the article I will describe a completely free way to create a semantic core, but I will also show options using various paid services.

After reading the article, you will learn how to:

  • Choose the right queries for your theme
  • Collect the most complete core of phrases
  • Clean out irrelevant queries
  • Group queries and create a structure

Having collected the semantic core, you can:

  • Create a meaningful structure on the site
  • Create a multi-level menu
  • Fill pages with text and write their meta descriptions and titles
  • Track your website's positions for these queries in search engines

Collection and clustering of the semantic core

Correct compilation for Google and Yandex begins with identifying the main key phrases of your topic. As an example, I will demonstrate its composition using a fictitious online clothing store. There are three ways to collect the semantic core:

  1. Manual. Using the Yandex Wordstat service, you enter your keywords and manually select the phrases you need. This method is quite fast if you only need to collect keys for one page, but it has two disadvantages.
    • Accuracy is poor: with this method you can always miss some important words.
    • You will not be able to assemble a semantic core for a large online store; the Yandex Wordstat Assistant plugin simplifies the work, but does not solve the problem.
  2. Semi-automatic. Here I mean using a program to collect the core and then manually breaking it down into sections, subsections, pages, etc. In my opinion, this way of compiling and clustering the semantic core is the most effective, because it has a number of advantages:
    • Maximum coverage of the topic.
    • High-quality breakdown.
  3. Automatic. Nowadays several services offer fully automatic core collection or clustering of your queries. I do not recommend the fully automatic option, because the quality of collection and clustering is still quite low. Automatic clustering is gaining popularity and has its place, but you will still need to merge some pages manually, because the system does not produce an ideal ready-made solution. In my opinion, you will also simply get confused and fail to immerse yourself in the project.

To compile and cluster a full-fledged, correct semantic core for any project, in 90% of cases I use the semi-automatic method.

So, we need to follow these steps in order:

  1. Select queries for the topic
  2. Collect the core from these queries
  3. Clean out non-target queries
  4. Cluster (break the phrases into a structure)
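The four steps above can be sketched as a minimal pipeline. All function bodies below are illustrative stubs of my own (real collection would come from Wordstat or a tool like KeyCollector), shown only to make the flow concrete.

```python
# Minimal sketch of the four-step pipeline. Function names and data are
# hypothetical placeholders, not a real tool's API.
def collect_queries(seeds):
    # Step 2: in practice each seed expands into hundreds of "tail" queries.
    tails = {"dresses": ["buy dresses", "prom dresses", "sims dresses"]}
    return [q for s in seeds for q in tails.get(s, [s])]

def clean_queries(queries, stop_words):
    # Step 3: drop non-target queries containing any stop word.
    return [q for q in queries if not any(w in q for w in stop_words)]

def cluster_queries(queries):
    # Step 4: naive grouping by first word; real clustering uses intent
    # and SERP data, then manual review.
    groups = {}
    for q in queries:
        groups.setdefault(q.split()[0], []).append(q)
    return groups

core = cluster_queries(clean_queries(collect_queries(["dresses"]), ["sims"]))
print(core)  # {'buy': ['buy dresses'], 'prom': ['prom dresses']}
```

The point is the order of operations: expand first, clean second, cluster last, exactly as the numbered list prescribes.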

I showed an example of a selected semantic core grouped into a structure above. Let me remind you that our example is an online clothing store, so let's start with point 1.

1. Selection of phrases for your topic

At this stage we will need the Yandex Wordstat tool, your competitors and logic. In this step, it is important to collect a list of phrases that are thematic high-frequency queries.

How to select queries to collect semantics from Yandex Wordstat

Go to the service, select the city(s)/region(s) you need, enter what you consider the highest-volume queries and look at the right column. There you will find the thematic words you need, both for other sections and as frequent synonyms of the entered phrase.

How to select queries before compiling a semantic core using competitors

Enter the most popular queries into the search engine and select one of the most popular sites, many of which you most likely already know.

Pay attention to the main sections and save the phrases you need.

At this stage it is important to get things right: cover all possible words from your topic and miss nothing; then your semantic core will be as complete as possible.

For our example, we need to create a list of the following phrases/keywords:

  • Clothing
  • Shoes
  • Boots
  • Dresses
  • T-shirts
  • Underwear
  • Shorts

Which phrases are pointless to enter? “Women's clothing”, “buy shoes”, “prom dress”, etc. Why? These phrases are the “tails” of the queries “clothes”, “shoes” and “dresses” and will be added to the semantic core automatically at the second collection stage. That is, you can add them, but it would be pointless double work.

Which keys should you enter? “Low boots” and “boots” are not the same thing. It is the word form that matters, not whether the words share a root.

For some, the list of key phrases will be long, but for others it consists of one word - don’t be alarmed. For example, for an online store of doors, the word “door” may well be enough to compile a semantic core.

So, at the end of this step we should have a list like this.

2. Collecting queries for the semantic core

For a proper, full collection, we need a program. I will show the example in two programs at once:

  • Paid - KeyCollector, for those who have it or are ready to buy it.
  • Free - Slovoeb, for those who are not ready to spend money.

Open the program

Create a new project and name it, for example, Mysite.

Now to further collect the semantic core, we need to do several things:

Create a new account on Yandex mail (using an old one is not recommended, since it may be banned for making too many requests). Suppose you created the account [email protected] with the password super2018. Now specify this account in the settings as ivan.ivanov:super2018 and click the “save changes” button below. More details in the screenshots.

Select a region for compiling the semantic core. Choose only the regions in which you are going to promote, then click save. The frequency of the queries, and whether they are included in the collection at all, depends on this.

All settings are complete; all that remains is to add the list of key phrases prepared in the first step and click the “start collecting” button.

The process is completely automatic and quite long. You can make coffee for now, but if the topic is broad, for example, like the one we are collecting, then this will last for several hours 😉

Once all the phrases are collected, you will see something like this:

And this stage is over - let's move on to the next step.

3. Cleaning the semantic core

First, we need to remove requests that are not interesting to us (non-target):

  • Related to another brand, for example, Gloria Jeans, Ekko
  • Information queries, for example, “I wear boots”, “jeans size”
  • Similar in topic, but not related to your business, for example, “used clothing”, “clothing wholesale”
  • Queries that are in no way related to the topic, for example, “Sims dresses”, “puss in boots” (there are quite a lot of such queries after selecting the semantic core)
  • Queries from other regions, metro stations, districts and streets (no matter which region you collected queries for, other regions still come up)

Cleaning must be done manually as follows:

Enter a word and press “Enter”; if the search finds exactly the phrases we were looking for in the collected semantic core, select them and press delete.

I recommend entering not the whole word, but its stem, without prepositions or endings: for example, the stem “glori” will find both “buy jeans at gloria” and its inflected forms, whereas the full word “gloria” would miss them.

Thus, you need to go through all the points and remove unnecessary queries from the semantic core. This may take a significant amount of time, and you may end up deleting most of the collected queries, but the result will be a complete, clean and correct list of all possible promoted queries for your site.

Now upload all your queries to Excel.

You can also remove non-target queries from the semantics en masse if you have a list of stop words; this is easy for typical groups of words with cities, metro stations and streets. You can download the list of words that I use at the bottom of the page.
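The mass cleanup described above can be sketched in a few lines. The stop-stem list here is a hypothetical stand-in for the downloadable list mentioned in the text; using stems (e.g. "glori") catches inflected forms that a full word would miss.

```python
# Sketch: mass-cleaning a query list with stop-word stems, as described above.
# The stems below are illustrative examples, not a real curated list.
stop_stems = ["glori", "ekko", "wholesale", "used "]

queries = [
    "buy jeans at gloria",
    "women's boots",
    "used clothing",
    "red dress buy",
]

def is_target(query: str, stems) -> bool:
    # A query is kept only if it contains none of the stop stems.
    return not any(stem in query.lower() for stem in stems)

clean = [q for q in queries if is_target(q, stop_stems)]
print(clean)  # keeps only "women's boots" and "red dress buy"
```

The same loop scales to tens of thousands of collected queries, which is exactly where manual deletion stops being practical.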

4. Clustering of the semantic core

This is the most important and interesting part: we need to divide our queries into pages and sections, which together will form the structure of your site. A little theory first - what to rely on when dividing queries:

  • Competitors. You can pay attention to how the semantic core of your competitors from the TOP is clustered and do the same, at least with the main sections. And also see which pages are in the search results for low-frequency queries. For example, if you are not sure whether or not to create a separate section for the query “red leather skirts,” then enter the phrase into the search engine and look at the results. If the search results contain resources with such sections, then it makes sense to create a separate page.
  • Logic. Do the entire grouping of the semantic core using logic: the structure should be clear and form, in your head, a structured tree of pages with categories and subcategories.

And a couple more tips:

  • It is not recommended to place less than 3 requests per page.
  • Don’t make too many levels of nesting, try to have 3-4 of them (site.ru/category/subcategory/sub-subcategory)
  • Do not make long URLs. If you end up with many nesting levels when clustering the semantic core, try to shorten the URLs of categories high in the hierarchy, i.e. instead of “your-site.ru/zhenskaya-odezhda/palto-dlya-zhenshin/krasnoe-palto” use “your-site.ru/zhenshinam/palto/krasnoe”
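The URL-shortening advice above can be expressed as a simple slug mapping. The dictionary below is illustrative, built from the example in the tip itself.

```python
# Sketch: shortening category slugs high in the hierarchy, per the tip above.
# The slug mapping mirrors the article's own example and is illustrative.
short_slugs = {
    "zhenskaya-odezhda": "zhenshinam",
    "palto-dlya-zhenshin": "palto",
    "krasnoe-palto": "krasnoe",
}

def short_url(site: str, *categories: str) -> str:
    # Replace each long slug with its short form (unknown slugs pass through).
    parts = [short_slugs.get(c, c) for c in categories]
    return site + "/" + "/".join(parts)

url = short_url("your-site.ru", "zhenskaya-odezhda",
                "palto-dlya-zhenshin", "krasnoe-palto")
print(url)  # your-site.ru/zhenshinam/palto/krasnoe
```

Keeping the mapping in one place also makes it easy to audit nesting depth across the whole structure.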

Now to practice

Clustering the core: an example

First, let's break down all requests into main categories. Looking at the logic of competitors, the main categories for a clothing store will be: men's clothing, women's clothing, children's clothing, as well as a bunch of other categories that are not tied to gender/age, such as simply “shoes”, “outerwear”.

We group the semantic core using Excel. Open our file and act:

  1. We break it down into main sections
  2. Take one section and break it into subsections

I will show this using one section, men's clothing, and its subsections as an example. To separate some keys from others, select the entire sheet and click Conditional Formatting -> Highlight Cells Rules -> Text that Contains.

Now, in the window that opens, type the stem “men” and press Enter.

Now all our keys for men's clothing are highlighted. It is enough to use a filter to separate the selected keys from the rest of our collected semantic core.

So let’s turn on the filter: you need to select the column with queries and click sort and filter->filter

And now let's sort

Create a separate sheet, cut the highlighted lines and paste them there. You will use this same method later to split the rest of the core.
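The same select-by-substring-and-split routine can be scripted instead of done by hand in Excel; here is a sketch using pandas. The queries and column names are illustrative, and note the substring pitfall: "women's" contains "men's", so it has to be excluded explicitly, just as a short stem in Excel can over-match.

```python
import pandas as pd

# Sketch: the Excel "text contains" + filter routine scripted with pandas.
# Queries and column names are illustrative.
df = pd.DataFrame({"query": [
    "men's jeans buy", "women's coat", "men's t-shirts", "children's shoes",
]})

# "women's" contains the substring "men's", so exclude it explicitly.
mask = df["query"].str.contains("men's") & ~df["query"].str.contains("women's")
mens = df[mask]    # rows for the "Men's clothing" sheet
rest = df[~mask]   # rows that stay in "All queries"
print(mens["query"].tolist())
```

Each section then gets its own DataFrame (or sheet, via `DataFrame.to_excel`), which is the scripted equivalent of the cut-and-paste step above.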

Rename this sheet “Men's clothing”, and call the sheet with the rest of the semantic core “All queries”. Then create another sheet, name it “Structure”, and place it first. On the structure sheet, create a tree. You should get something like this:

Now we need to divide the large men's clothing section into subsections and sub-subsections.

For ease of use and navigation through your clustered semantic core, provide links from the structure to the appropriate sheets. To do this, right-click on the desired item in the structure and do as in the screenshot.

Now methodically separate the queries manually, deleting along the way whatever you did not manage to notice and remove at the core-cleaning stage. Ultimately, thanks to the clustering of the semantic core, you should end up with a structure similar to this:

So. What we learned to do:

  • Select the queries we need to collect the semantic core
  • Collect all possible phrases for these queries
  • Clean out "garbage"
  • Cluster and create structure

What you can do next thanks to such a clustered semantic core:

  • Create a structure on the site
  • Create a menu
  • Write texts, meta descriptions, titles
  • Track positions to monitor query dynamics

Now a little about programs and services

Programs for collecting the semantic core

Here I will describe not only programs, but also the plugins and online services that I use.

  • Yandex Wordstat Assistant is a plugin that makes it convenient to select queries from Wordstat. Great for quickly compiling the core of a small site or 1 page.
  • KeyCollector (its free counterpart is Slovoeb) is a full-fledged program for clustering and creating a semantic core. It is very popular and offers a huge amount of functionality beyond its main purpose: key selection from many other systems, auto-clustering, collecting positions in Yandex and Google, and much more.
  • Just-magic is a multifunctional online service for compiling a core, auto-clustering, checking text quality and more. The service is shareware: full operation requires a subscription fee.

Thank you for reading the article. Thanks to this step-by-step manual, you will be able to compile the semantic core of your site for promotion in Yandex and Google. If you have any questions, ask in the comments. Below are the bonuses.

Mikhail (Kashchey)

18.11.2015

Semantic core of a website: what is it? Collection of semantic core and analysis of key queries

What is the semantic core (SC)? Before answering, let's agree on some related concepts, so that we speak the same language. So:

Key query - a phrase that is typed into the search bar of Yandex, Google, etc.

Frequency of queries. Queries fall into low-frequency, mid-frequency and high-frequency categories (LF, MF, HF).

Target audience (TA). Those who are interested in your services, products or information.

What is the SC? The semantic core is the collection of key queries of all categories for which your target audience will come to your website. Something like that. The second issue to consider before compiling a semantic core - or rather, before the story of how to compile one - is the frequency of key queries. What is it, and how do you divide queries by frequency?

It is not difficult to divide queries by frequency. If a key is entered more than 1000 times a month, it is definitely high-frequency. If 100-1000 times, it is mid-frequency. Anything less than 100 is low-frequency.
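The thresholds just described can be written as a small helper, remembering the caveat that follows: in narrow niches the cutoffs differ, so the numbers here are only the article's rule of thumb.

```python
# The article's rule-of-thumb frequency bands as code.
# In narrow niches these cutoffs do not hold (see the note below).
def frequency_band(monthly_hits: int) -> str:
    if monthly_hits > 1000:
        return "HF"   # high-frequency
    if monthly_hits >= 100:
        return "MF"   # mid-frequency
    return "LF"       # low-frequency

print(frequency_band(2500), frequency_band(400), frequency_band(37))  # HF MF LF
```

Sorting a collected core through such a function gives a quick first pass at prioritization before any competition analysis.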

Attention! In some narrow topics these numbers do not work. In that case, find the highest-frequency query - that will be your HF. Mid-frequency queries will fall between LF and HF. Which services help you find out how many people enter a given key phrase each month? Look for the answer in the article: (it also contains information about SEO services that help you select key queries)

Now that I have tried to explain what is what, let’s get down to the main thing: collecting the semantic core.

Drawing up a semantic core for the site

Compiling a semantic core is not as easy as it seems. You need to take into account all possible LF and MF queries. To create an SEO core, it is better to use special services; information about them can be found at the link above.

How to select requests? Let's imagine that you are creating a website for cat lovers. How would you look for information about cats? What should you write in the search? The first thing that comes to mind. For example:

Cat (HF+) (“cats” is not a separate query)

Siamese cat (HF)

Koshaki (MF)

What do domestic cats eat?

I checked the frequency on the wordstat.yandex.ru service. Like this:

Pay attention to the quotation marks. They are needed to find out how many people entered the query verbatim. When composing semantics, you need to focus on direct queries and their “tails”. You can read about “tails” separately.

Hope this is clear.

Finding all possible topical keys is dreary, painstaking work that takes a lot of time. However, the success of further SEO optimization of the resource depends heavily on the quality of the assembled semantic core.

What is the most important thing when compiling an SC?

The most important thing when compiling a semantic core is to structure all the key queries correctly. This is necessary to use the compiled semantic core as efficiently as possible. And there is nothing better for that than a table.

Here is a good example of a table. By the way... it is better to create a table in Excel.

So what do we see? We see a competent structure, it is easy to work with. You can add your own columns to the table to make your task easier.

Your task is to find as many low-competition queries as possible and promote your site for them. How do you determine that a query has low competition? If the query is low-frequency, then in 80% of cases it also has low competition. You can also check the level of competition in a search engine, like this:

Result: 43 million answers. Competition for the cat theme will be low. For other topics you need to look at different numbers: for example, the key “copywriter” is a high-frequency key with 2 million answers, and it has high competition.

Articles written for LF queries will automatically include HF queries - this is normal. Ideally you would write one article per key, select a picture for it, push it through social network groups and promote it with linked articles, but that is long and expensive. Therefore an article usually includes 2-3 keys - this reduces the cost of the articles.

The article didn't answer your question? So ask it in the comments!

P.S. I will be glad to see them.

The semantic core is a set of keywords that search engine users enter into the search bar to find an answer to their query.

Collecting a semantic core is necessary to find all the keywords and phrases for which a company or website is ready to give a comprehensive answer and satisfy customer needs - the phrases users formulate when searching for an answer to their question. If we have the keyword, the user can reach our site; if not, he won't.

The volume of keywords in the semantic core depends on the goals, objectives, and characteristics of the business. The reach of the target audience, its conversion and cost depend on the volume and depth of the semantic core. Full semantics allows you to increase coverage and reduce competition.

Goals of collecting the semantic core

Searching for and selecting keywords is one of the stages of search marketing, and it greatly influences further success. Based on the compiled semantic core, the following will be developed:

  • Website:
    • “Ideal” structure of a website, online store or blog. There are two approaches here: SEO (search engine optimization) and PR (public relations). The SEO approach starts with collecting all key queries: by covering the maximum number of niche keywords, we develop the site structure around real user queries and needs. With the PR approach, the site structure is designed first, based on the information we want to convey to users; keywords are then collected and distributed across that structure. Which strategy to choose depends on the goals: if you need to convince people of something or convey a position, choose the PR approach; if you need as much traffic as possible, for example for an information website or online store, choose the first one. In general, this is the foundation for future promotion: a well-designed site structure lets users sort information conveniently (a positive user experience) and lets search engines index it. The criteria for accepting the future structure of the site are the goals and expectations of users and the results of analyzing successful competitors.
  • Lead generation strategy:
    • SEO strategy. Having identified the search queries with the least competition and the greatest potential traffic that they can bring, a content strategy is developed to fill and optimize the site.
    • Contextual advertising. When running contextual advertising campaigns in Yandex Direct, Google Ads, etc., the maximum number of relevant keywords is collected for which we are able and ready to satisfy demand.
    • Map of information needs (content plan). Having grouped keywords by user intents (intentions), technical specifications are drawn up and handed to copywriters for writing articles.

Study of the search process in search engines

Psychology of Internet Search

People don't think in words. Words are symbols through which we convey our thoughts. Everyone has their own mechanism for transforming thoughts into words; each person formulates questions in their own way. Every query entered into a search engine's search bar is accompanied by certain thoughts and expectations.

By understanding how people search online, you can tailor your marketing efforts to their interests. Knowing how the search process works, we select appropriate keywords and optimize the site, setting up contextual advertising.

After the user clicks the “Find” button, the search results that appear should meet his expectations. In other words, the results (both organic listings and contextual advertising) should help solve the user's question. Therefore, the marketer's task is to tailor the ad and search snippet so that they:

  1. reflect the search query;
  2. consider the stage of the buying cycle.

That is, the words shown in snippets and ads lay the foundation for the user's expectations of your site. The landing page he reaches by clicking the link must therefore meet those expectations. By meeting them, we increase the likelihood of a positive outcome. Advertising should lead the user to a place where he will immediately receive an answer.

Search categories:

  1. directly formulated (metal lathe, dentist);
  2. description of the problem (how to sharpen the shaft, toothache);
  3. symptoms of the problem (the feed box of the lathe does not work, a tooth has crumbled);
  4. description of the incident (crunching sound during turning on a TV-16 lathe);
  5. name of the product, article, brand, manufacturer.

If you study the keywords carefully, you can get to the root of the problem: a gear in the feed box broke while the lathe was running, so we can offer to manufacture a replacement or suggest a new machine. The person did not treat the diseased tooth and it crumbled from caries, so we, as a dental practice, will offer to install an implant.

Classification and types of search queries

By search type:

  • informational – queries to find information, for example, “speed of light”, “how to make a fishing rod with your own hands”, “why the earth is round”, etc.;
  • navigational – queries by which users search for an organization, brand, person, etc. For example, “Coca-cola”, “restaurant “Pyatkin”, “Lev Tolstoy”;
  • transactional – queries entered by users with the intention of performing some targeted action. For example, “buy Samsung Galaxy S6 phone”, “download the book Web Analytics in Practice online”;
  • fuzzy queries – all queries that cannot be unambiguously attributed to one of the types described above, i.e. clearly define what exactly the user is looking for. For example, “Maine Coon” - it is not clear what the user wants: to find out what kind of cat breed it is or to look for where to buy it, or perhaps something else.
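As a rough illustration of this classification, here is a naive rule-based sketch. The marker-word lists are my own assumptions for demonstration; real classifiers also use SERP composition and entity dictionaries (which is why navigational intent is left out here).

```python
# Naive rule-based sketch of the query-type classification above.
# Marker-word lists are illustrative assumptions, not an exhaustive method.
TRANSACTIONAL = ("buy", "download", "order", "price")
INFORMATIONAL = ("how", "why", "what", "recipe")

def query_type(query: str) -> str:
    words = query.lower().split()
    if any(w in words for w in TRANSACTIONAL):
        return "transactional"
    if any(w in words for w in INFORMATIONAL):
        return "informational"
    # Navigational detection needs a brand/entity dictionary, so anything
    # unmatched falls into the "fuzzy" bucket, as in the Maine Coon example.
    return "fuzzy"

print(query_type("buy samsung galaxy s6"))   # transactional
print(query_type("why the earth is round"))  # informational
print(query_type("maine coon"))              # fuzzy
```

Even this crude split is useful for routing: transactional keys go to product pages, informational ones to the blog.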

By geodependence:

  • geo-dependent – queries that depend on the user's location. For example, “grocery stores”, “tire service center”.
  • geo-independent – queries that do not depend on a person's location. For example, “recipe for cutlets”, “how to install an alarm”.

By naturalness:

  • natural – queries entered by users in natural human language: “prices for Samsung laptops”, “characteristics of lever scissors”;
  • telegraphic – queries entered in “telegraphic language”: “Samsung laptop prices”, “lever scissors specifications”.

By seasonality:

  • seasonal – time-sensitive keywords. Such queries include “winter tires”, “New Year's fireworks”, “Easter eggs”, etc.
  • non-seasonal - they are not sensitive to time, they are popular at any time of the year. Examples of such queries are: “wristwatch”, “how to cook pizza”, “install Windows”.

By frequency:

  • HF – high frequency requests.
  • MF – mid-frequency requests.
  • LF – low frequency requests.
  • “Long tail” – micro-frequency search queries, usually consisting of 4 or more words with a frequency of 1-3 per month. Together such queries add up to tangible traffic with the least competition in the search results, practically without extra promotion effort.

It is impossible to say that a specific number of queries makes a key high-frequency or low-frequency, since these values vary greatly from niche to niche. In one niche 1000 queries per month may correspond to a low-frequency query, while in another it would be high-frequency.

Keyword frequency values ​​are conditional and are intended for ranking by popularity.

By competitiveness:

  • HC – highly competitive queries.
  • MC – medium-competition queries.
  • LC – low-competition queries.

This classification lets you build a list of priority key queries to use for search engine promotion. It also helps reduce the cost per click in contextual advertising campaigns.

Common goals of the user, webmaster and search engine

In the process of searching for information through a search engine, 3 parties are involved: the search engine, the user and the web resource. And each side has its own goals: the user needs to find an answer to his query, and the search engine and web resource need to make money from this.

If webmasters start manipulating the search engine without giving users the answers they need, everyone loses: the user does not get an answer to his query and goes to look in another search engine or on another site.

Therefore, user needs come first, because without users neither the search engine nor the web resource will function. By satisfying the interests of search engine users first, we contribute to everyone's earnings: the search engine earns on contextual advertising, the web resource on selling goods or services to users or advertisers. Everyone wins. Link your goals to your users' goals; then the probability of a positive outcome rises sharply.

Keyword Research

As we have already found out, keywords are thoughts expressed in verbal form. Our goal is to select keywords that reflect consumer thoughts and demand that we can satisfy. If we have a keyword, the user will see our message, if not, he will not see it.

Some keywords generate a lot of traffic, others little. Some give high conversions, others generate low-quality traffic.

Each keyword constitutes a separate submarket with its own clientele. Behind each key phrase lies some need, desire, question or suggestion that a person may not be aware of.

Having determined which stage of the purchasing cycle the keyword belongs to, we will understand when and why the user is looking for it, therefore, we will provide information that is relevant to him and meets his expectations.

Before you begin your research, ask yourself the following questions:

  1. What keywords should we use to reach our target audience?
  2. What key phrases do the customer segments we care about use when searching for our products?
  3. What is going on in the user's mind when typing this query?
  4. What stage of the buying cycle are they in when using this key phrase?

Keyword Research Objectives

  1. Gain insight into the existing “ecosystem” and develop a strategy for natural and paid search.
  2. Identify the needs of potential clients and develop appropriate responses to them.

Anatomy of search queries

Key phrases consist of 3 elements:

[body]+[qualifier]+[tail],

where the body (also called the “mask”) is the basis of the query, from which alone the user's intention cannot be determined; the qualifier defines the intent and classifies the query as transactional, informational, or navigational; the tail merely details the intention or need.

For example: buy a lathe; 6P12 milling machine specifications; buy a bimetallic band saw for metal in Moscow.

Knowledge of the anatomy of search queries allows you to collect all the masks when working out the semantics, as well as correctly distribute the collected keywords according to the purchasing cycle when developing a paid and natural search strategy.
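
The body/qualifier/tail anatomy lends itself to a simple rule-based sketch. The qualifier word lists below are illustrative assumptions, not a standard; real lists come from studying your niche's semantics.

```python
# Sketch: classifying a query's intent by its qualifier words.
# Word lists are invented examples; extend them for your own niche.

TRANSACTIONAL = {"buy", "order", "price", "cost", "cheap"}
INFORMATIONAL = {"how", "what", "specifications", "review", "instructions"}
NAVIGATIONAL = {"official", "site", "login"}

def classify_intent(query: str) -> str:
    """Return a rough intent label based on qualifier words in the query."""
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    if words & NAVIGATIONAL:
        return "navigational"
    return "unclassified"  # body-only query (a "mask"): intent is ambiguous

print(classify_intent("milling machine"))             # unclassified (mask only)
print(classify_intent("buy milling machine moscow"))  # transactional
```

A body-only mask falls through to "unclassified", mirroring the point above that the body alone does not reveal intent.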

Keyword Segmentation

When searching for masks and working through an already collected semantic core, it becomes necessary to segment keywords for more convenient subsequent work. Having segmented the keys, we understand how people search, and can therefore expand them with additional queries, assess the likelihood of sales, and work according to the strategy. There are no fixed segmentation rules, because semantics vary greatly from niche to niche.

Here are a few examples of the criteria by which semanticists segment cores:

  • by type of keyword:
    • direct demand — they are looking for what we sell, for example, a milling machine;
    • indirect demand — they are looking for a milling machine, and we sell cutters for it;
    • situational demand — a situation creates the need: the neighbors flooded the apartment, and we do repairs;
    • other — navigational and vital queries.
  • by search object:
    • company or object (for example, a repair team);
    • product (repair of milling machines);
    • production, sales, wholesale/retail (production of spare parts for repairs according to drawings);
    • action on the object (commissioning work);
    • specialist (design engineer);
    • part of the object, sub-service (development of design documentation for milling machine spare parts).
  • by expected order value (average check).

Long tail strategy

Long tail, or the “long tail” concept, was popularized in 2004 by Wired magazine editor Chris Anderson. Its essence is that a company earns more from a wide assortment of rarely sold goods than from its bestsellers.

The concept is easy to see in the example of a bookshelf. Because space is limited, a store owner tries to stock only the most popular products; once the fashion for a book ends, its place is taken by another that is gaining popularity.

In online bookstores, the shelf is not limited: the catalog contains all available books. Studies have shown that, thanks to the wide range, the total sales volume of “unpopular” books exceeds the sales volume of bestsellers. The concept works in sales of music, films, medicines, etc., and of course when compiling a semantic core.

As with the books example, key search phrases from the long tail can bring in more traffic than high-frequency queries.

In practice, long-tail phrases have the highest conversion rates: the people who use them are most likely already at the purchase-decision stage.
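
The long-tail arithmetic can be shown with a toy, Zipf-like frequency curve. All numbers here are invented; the point is only that many rare queries can outweigh a few popular ones in total demand.

```python
# Toy illustration of the long-tail effect: 3 "head" queries vs 2000 "tail"
# queries with invented monthly frequencies.

frequencies = [10_000, 5_000, 3_000] + [15] * 2_000  # head + long tail

head = sum(frequencies[:3])   # traffic from the 3 high-frequency queries
tail = sum(frequencies[3:])   # traffic from the 2000 rare queries

print(head)  # 18000
print(tail)  # 30000 -- the tail brings more total demand than the head
```

With this (invented) distribution the tail delivers more total traffic than the head, which is exactly the argument for working out the full depth of the core rather than only high-frequency masks.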

New keywords

If you are an opinion leader, have your own audience and can influence it, try creating new key search phrases around which your content will be built. If the audience picks them up, then you will be the first to appear in search results.

Segmentation and sales funnel

Customer segmentation and role principle

Before collecting keywords, a company needs to determine its target audience, segments, and customer avatars. To make this clearer, here is an example: a company sells vibratory plates. Its target audience is construction companies, and its main segments are companies doing road work, laying utilities underground, and so on. Avatars are the people who make purchasing decisions and search for goods and services.

We will not dwell on this in detail here.

The role principle means paying attention to the type of person who might be looking for your product: it could be a private individual, a procurement specialist, an engineer, or a CEO. People in different roles may use different keywords, so, knowing your client's avatar and taking their behavior into account, you select keywords for the required roles.

For example, if your company's customer is an engineer, his search queries may include specialized technical terms.

Before we begin, note that every business has its own specific sales funnel; only the general concept is discussed here. It consists of two parts: advocacy and loyalty.

Sales funnel stages:

  1. Awareness — inform about our product everywhere so that people know it exists. This stage uses keywords of a general nature.
  2. Interest — encourage the consumer to think about how our product will make their life better. At this stage, the product's advantages and benefits are communicated; the main goal is to create desire for the product.
  3. Study — the consumer looks for information to make an informed decision: they get acquainted with the industry's professional jargon, and brands and names of specialized services appear in their search queries. The main goal is to convey the product's benefits and capabilities in as much detail as possible.
  4. Comparison of analogues — the consumer compares similar products. Keywords become specific, indicating a certain level of knowledge.
  5. Purchase — before deciding to buy, the buyer studies information about prices, guarantees, delivery costs, terms of service, returns, etc. Keywords: low-frequency queries, queries with selling add-ons.
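
Mapping keywords onto these stages can be sketched as a rule table. The marker words below are illustrative assumptions; real markers come from studying the actual queries in your niche.

```python
# Sketch: assigning a keyword to a sales-funnel stage by marker words.
# Markers are invented examples, checked from the bottom of the funnel up;
# queries with no specific markers default to the general "awareness" stage.

STAGE_MARKERS = [
    ("purchase",   {"buy", "price", "delivery", "order"}),
    ("comparison", {"vs", "compare", "best", "alternative"}),
    ("study",      {"specifications", "review", "manual"}),
]

def funnel_stage(keyword: str) -> str:
    words = set(keyword.lower().split())
    for stage, markers in STAGE_MARKERS:
        if words & markers:
            return stage
    return "awareness"  # general query, no stage-specific markers

print(funnel_stage("milling machine"))             # awareness
print(funnel_stage("6P12 milling machine price"))  # purchase
```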

Keyword Research Tools

Core expansion algorithm: collecting nested queries

After all the masks have been collected, we move on to collecting key queries in depth.

You need to collect nested queries for:

  • writing relevant advertisements for KS;
  • setting the required rate for a specific CS;
  • installing a relevant link in the ad leading to the required page.

Automated tools for collecting nested queries include desktop software, online services, and browser extensions. There are quite a few of them; we use the most popular one, Key Collector, a desktop program that parses keywords and their frequencies and supports all the operations needed to build a semantic core.

It is advisable to parse each semantic group separately.

The expansion algorithm will be as follows:
  1. parsing masks in Yandex Wordstat;
  2. parsing masks in Google AdWords;
  3. parsing masks in the Bukvariks database;
  4. parsing masks in the Keys.so database;
  5. downloading keywords from Yandex Metrica and Google Analytics;
  6. cleaning and collecting keyword frequencies;
  7. batch collection of search tips;
  8. batch collection of similar search queries from search results;
  9. cleaning and collecting frequencies.
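
The merge-and-clean part of the steps above can be sketched as follows. The sources and keyword data are invented; in practice they come from Wordstat, AdWords, the databases, and analytics exports listed above.

```python
# Sketch: merging keys collected from several sources, normalizing them
# (lowercase, collapsed whitespace) and deduplicating while preserving order.

def normalize(kw: str) -> str:
    return " ".join(kw.lower().split())

def merge_sources(*sources):
    seen, merged = set(), []
    for source in sources:
        for kw in source:
            norm = normalize(kw)
            if norm not in seen:       # skip duplicates across sources
                seen.add(norm)
                merged.append(norm)
    return merged

wordstat = ["Buy milling machine", "milling machine 6P12"]
adwords  = ["buy  milling machine", "milling machine repair"]
keys = merge_sources(wordstat, adwords)
print(keys)  # ['buy milling machine', 'milling machine 6p12', 'milling machine repair']
```

Frequency collection and minus-word cleaning would then run over this deduplicated list.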

The Yandex Wordstat and Google AdWords tools give us the main key phrases, with their frequency and popularity in the search engines. Bukvariks, Keys.so, keyword exports from Yandex Metrica and Google Analytics, search suggestions, and similar search queries additionally supply “tail” phrases coming from real users.

Adaptation of the semantic core for contextual advertising

The preparation algorithm looks like this:

  1. select selling keywords;
  2. segment the keywords;
  3. work out negative keywords and negative phrases;
  4. apply operators.

Keywords for YAN (the Yandex Advertising Network) and the Google Display Network are selected on a somewhat different principle than keywords for search.

Selecting selling keywords

From the existing list of key phrases, we need to understand what the person wants (their need) and what answer they expect to their question. Our task is to answer, in the search context, those questions that are of interest to us, i.e. to choose the keywords most likely to lead to conversions.

In addition, competent keyword selection reduces non-targeted impressions, which increases CTR and lowers the cost per click.

There are situations when the meaning of a query is unclear. To understand what most people mean in such cases, enter the query into the search engine and look at the results: thanks to machine learning and other personalization technologies, Yandex and Google already know what people want for each specific query, so it remains to analyze the results and make the right decision. The second way is to look at the nested queries of the word form in Yandex Wordstat; the third is to guess the meaning but mark the query for further elaboration.

The completeness of the semantic core is one of the important factors in the success of an advertising campaign, so the future result depends on the quality of keyword elaboration. In contextual advertising, strive not for the volume of the core but for its thorough, high-quality elaboration.

Depending on your goals, you can then apply the following strategy: identify the most converting queries, test them, and then scale the advertising campaign.

Segmenting the semantic core

No universal segments can be singled out, because everything varies from niche to niche. Most commercial sites can be segmented by the stages of the buying cycle, or you can identify segments yourself by studying your core.

The main task of segmentation is to make the campaign easy to manage later: setting bids and budgets, quickly finding an ad and enabling or pausing it, and so on.

Working on negative words and phrases

You already collected negative words and phrases at the stage of building the semantic core. All that remains is to adapt them to your advertising campaign and perform cross-negative matching (cross-minusing).
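
The cross-minusing step — adding the extra words of narrower phrases as negatives to broader phrases, so the phrases stop competing for the same impressions — can be sketched as follows. Lemmatization and word forms are ignored here, which is a simplification; the phrases are invented examples.

```python
# Sketch of cross-negative matching ("cross-minusing"): when phrase A's word
# set is a proper subset of phrase B's, phrase A gets B's extra words as
# negatives, so A no longer matches queries that B should capture.

def cross_minus(phrases):
    result = {}
    for a in phrases:
        a_words = set(a.split())
        negatives = set()
        for b in phrases:
            b_words = set(b.split())
            if a != b and a_words < b_words:   # a is the broader phrase
                negatives |= b_words - a_words  # minus the narrowing words
        result[a] = sorted(negatives)
    return result

phrases = ["milling machine", "buy milling machine", "milling machine repair"]
print(cross_minus(phrases))
# {'milling machine': ['buy', 'repair'], 'buy milling machine': [], 'milling machine repair': []}
```

After this step, the broad phrase “milling machine” no longer competes with its own narrower variants in the same campaign.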

Placing operators

Operators are applied to high-frequency queries to avoid unwanted competition, to save budget in highly competitive topics, and to formulate a phrase more precisely. Operators can be combined with each other.

Yandex Direct operators

+word — fixes stop words (auxiliary parts of speech: prepositions, conjunctions, particles) as well as pronouns and numerals.

!word — fixes the word form.

[word1 word2] — fixes the word order.

-word — excludes a word.

negative phrase — excludes an entire phrase.

Google AdWords operators: keyword match types

Broad match — used by default; the ad is shown for synonyms, typos, similar phrases, and queries with the same intent. For example, for the query “offices in Moscow” an ad may appear for the keyword “real estate Moscow”.

Broad match modifier — ads appear for queries containing the words marked with “+” and their close variants (but not synonyms), in any order. For example, +car +Hyundai +Tucson.

Phrase match — the ad appears for queries that exactly match the keyword or contain close variants of it; sensitive to word order. For example, the query “prices benq monitor” can trigger an ad for the keyword “benq monitor”.

Exact match — the ad appears for queries that exactly match the keyword or its close variants. For example, a search for “tire service for trucks” may trigger an ad for the keyword [truck tire service].

Negative keywords — ads are shown only for queries that do not contain the negative keywords.

Adaptation of the semantic core for search engine optimization (SEO)

We need the core in order to develop a clear, logical website structure and to cover the topic completely (we describe our topic with the keywords characteristic of it).

The algorithm for preparing a CS for SEO is as follows:

  1. remove informational queries from the semantic core (keep only commercial ones);
  2. cluster the remaining queries;
  3. prepare a relevance map based on the resulting clusters.

Clustering of the semantic core

Clustering is the combining of queries into groups based on user intent: different queries with which a person looks for the same thing are merged into one group. Queries are distributed into groups so that each group can be promoted on a single page (united by user intent).

As an example, you cannot promote informational and commercial requests on the same page. Moreover, it is recommended to promote these queries on different sites.

For example: special clothes — work clothes; zig machine — zigovka — zigovochny (creasing) machine; circular saw — sawing machine.

Clustering can be:

  • manual — grouping is done by hand in a specialized program or in Excel. The person doing the grouping must understand the topic well, otherwise nothing meaningful will come of it;
  • automatic — grouping is done automatically based on search results. This method speeds up the grouping of semantic cores consisting of a huge number of key phrases, and its accuracy is high (much higher than manual grouping done by someone who does not understand the topic). The main advantage of this method is that only queries of the same type end up in one group, i.e. commercial and informational queries will not be combined (the situation is well illustrated by the queries “smartphone” and “smartphones”: the first is informational and geo-independent, the second is commercial and geo-independent, while “laptop” and “laptops” are both commercial and geo-dependent);
  • semi-automatic — clusters are first created automatically and then finished by hand. This type of clustering combines both the pros and cons of the first two.
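
The automatic, SERP-based grouping described above can be sketched as follows. The SERP data here is invented; in practice it comes from parsing the top-10 results for each query. Note that this sketch compares each query only against the cluster's seed query (a soft-style grouping); hard clustering would additionally require every pair of queries inside a group to share results.

```python
# Sketch: clustering queries by SERP similarity. Two queries fall into one
# cluster if their top-10 result URLs share at least `threshold` entries.

def cluster(serps: dict, threshold: int = 3):
    """serps: query -> set of top-10 URLs. Greedy grouping by seed query."""
    clusters, used = [], set()
    for seed, seed_urls in serps.items():
        if seed in used:
            continue
        group = [seed]
        used.add(seed)
        for query, urls in serps.items():
            if query not in used and len(seed_urls & urls) >= threshold:
                group.append(query)  # enough shared URLs: same intent
                used.add(query)
        clusters.append(group)
    return clusters

serps = {  # invented SERP data: URL placeholders instead of real links
    "circular saw":    {"a", "b", "c", "d"},
    "sawing machine":  {"a", "b", "c", "e"},
    "milling machine": {"x", "y", "z", "a"},
}
print(cluster(serps))  # [['circular saw', 'sawing machine'], ['milling machine']]
```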

By type, clustering of the semantic core can be:

  • soft — each query in a group must share common search results with the group's main (central) query;
  • middle — a stricter variant: queries are added to a group only if they share results with the queries already in it;
  • hard — every query in a group must share common search results with every other query in the group.

For commercial sites, hard clustering is used in most cases; in special cases, middle can be used.

Relevance map

A relevance map is necessary for planning pages and working out the structure of the site. The main elements are:

  • name of the tree element (category, tag, page, etc.);
  • cluster name;
  • cluster keywords;
  • exact frequency (“!key !word”);
  • Title;
  • Description;
  • previous Title;
  • previous H1;
  • previous Description.
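
As a sketch, the relevance-map elements listed above can be assembled into a CSV for page planning. The sample row, its values, and the column names are invented for illustration.

```python
# Sketch: writing relevance-map rows to CSV. Column names follow the element
# list above; the sample data is invented.
import csv
import io

COLUMNS = ["tree_element", "cluster", "keywords", "exact_frequency",
           "title", "description", "prev_title", "prev_h1", "prev_description"]

rows = [{
    "tree_element": "category",
    "cluster": "creasing machines",
    "keywords": "creasing machine; buy creasing machine",
    "exact_frequency": 120,
    "title": "Creasing machines — buy with delivery",
    "description": "Creasing machines in stock.",
    "prev_title": "", "prev_h1": "", "prev_description": "",
}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

A spreadsheet built this way doubles as the input for a mind map of the site structure.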

To visualize the structure of a website, mind maps are often used.

Adaptation of the semantic core for information sites

Informational queries, viewed from a commercial perspective, mostly relate to the earlier stages of the sales funnel: awareness, interest, study, and comparison of analogues. That is, such keywords do not convert directly into sales, but through them we can inform the buyer and influence their decision-making.

If we are talking about creating websites to earn money from advertising, you need to specialize in a certain topic and develop it fully: the site should answer every question on the topic, thanks to competent elaboration of the entire semantics.

Algorithm for preparing CS for information sites:

  1. remove commercial queries from the semantic core (keep only informational ones);
  2. cluster the remaining queries;
  3. prepare a relevance map based on the resulting clusters.

As you can see, the algorithm is fundamentally no different from the work of adapting for SEO. The main nuance is the type of clustering. For information sites, choose soft or middle clustering.

Semantic core to order

The cost of a semantic core is typically 3–7 rubles per keyword. Thus, a clustered semantic core of 10,000 keywords for SEO or an information site will cost 50,000 rubles on average, and the price rises if you also need keyword segmentation for contextual advertising. The price depends heavily on the quality of the work: if you are offered something cheaper than these rates, it is worth asking why, since properly working out just the masks can take up to 16 hours. If you save on collecting the semantic core (and so fail to cover the topic's full scope and depth), you will then lose on contextual advertising (showing up only for the most competitive queries) and will not get enough customers from organic search results.

Here is the simplest example of how the quality of semantic core elaboration matters: for the query “zig machine” you compete with 36 competitors in the search results, for “creasing machines” with 27, and for “zigovochny machine” with only 8.

Request "Zigovochny machine"


